“Issues are risks that have already occurred” is the standard view of the difference between what is popularly called a “risk” and an “issue”. But that’s a superficial distinction that ignores a major attribute of “risk”: objectives. I had a recent discussion with a risk professional that went something like this: Me: “I need to add ‘X’ to the risk register. We’re a month out and ‘X’ is nonexistent.” RM: “Wait, it’s not done? We don’t have it?
”… and of course we have set up transitive trust between different sources of identity,” he said with glee. That’s when I knew for certain I was not speaking with someone who had a long and successful track record of setting up identity management in an enterprise environment. Information security practitioners will often roll out the old and tired “transitive trust is bad” adage. Unfortunately, the line is parroted far too often without an understanding of why transitive trust is bad. ### Trust and related terms ###
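A minimal sketch of why the adage exists (an assumed example, not from the post: the realm names and trust edges are hypothetical). When trust is transitive, every party you trust silently extends your trust to every party *they* trust, so the set of identity sources you effectively rely on grows far beyond the ones you actually vetted:

```python
# Hypothetical trust edges between identity sources. CorpAD's admins only
# ever reviewed and approved the PartnerAD relationship.
direct_trust = {
    "CorpAD": {"PartnerAD"},          # deliberately vetted
    "PartnerAD": {"ContractorLDAP"},  # the partner trusts their contractor
    "ContractorLDAP": {"TestRealm"},  # ...who trusts a throwaway test realm
}

def effective_trust(source, edges):
    """All identity sources 'source' ends up trusting once trust is transitive
    (a simple graph reachability walk)."""
    trusted, frontier = set(), [source]
    while frontier:
        for peer in edges.get(frontier.pop(), set()):
            if peer not in trusted:
                trusted.add(peer)
                frontier.append(peer)
    return trusted

# CorpAD vetted only PartnerAD, but transitively trusts everything downstream,
# including realms its admins have never even heard of:
print(effective_trust("CorpAD", direct_trust))
```

The point of the sketch: the weakest link in the transitive chain (here, the unvetted `TestRealm`) sets the effective security level of the whole trust relationship.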
In the grand gesture of protecting public welfare, Microsoft exposed just how fragile the internet really is when a large organisation decides to use lawfare. All that’s needed is a pliable judge. This isn’t Microsoft’s first such grand gesture or use of lawfare (using law as a weapon of conflict), nor is it likely to be the last. But it certainly proves the point of governments and organisations outside the US that are calling for multi-party governance of the internet.
Most businesses, most boards, don’t spend a lot of time thinking about uncertainty. In fact, they are terrified of doing so. The quote is from a good article in Strategic Risk Global about the value of risk management and why many risk managers can’t seem to change the perception of what they do for the organisation. [T]o create a more effective relationship between the risk function and the board, risk managers must stand up and show their bosses that they are not mere insurance buyers, as some senior leaders perceive them to be.
I’m catching up on my reading, and one of the books I often go to for quick reference is Charles Yoe’s “Principles of Risk Analysis”. There is a great chapter in Morgan D. Jones’s (1998) book The Thinker’s Toolkit. It is called “Thinking about Thinking,” and its primary thesis is that the human mind is not analytical by nature. He explores the fallibility of human reasoning and suggests that the best remedy for the mind’s ineffectiveness is to impose some structure on the way we think.
If you are told that you are WEIRD, don’t take it as an offence. It likely means that you belong to the roughly 12% of the global population that is Western, Educated, Industrialised, Rich, and Democratic *. Good as that may sound, it also puts you at a disadvantage when dealing with people from different cultural backgrounds. The problem with relying on studies done solely with WEIRD participants is that it skews the results and, worst of all, assumes a certain cultural background in the decision makers:
If Apple had followed the ‘wisdom of the crowds’ in 2006-2007, they’d never have made the iPhone. If smart CISOs paid too much attention to the Information Risk Leadership Council’s latest article, they’d be in as much trouble as they purportedly are right now. There is a lot wrong with CISOs who put all their hope and budget in prevention, but the word itself is definitely not the problem. Nor is the solution that CEB IRLC (the Corporate Executive Board’s Information Risk Leadership Council) advocated - although they just followed NIST’s lead.
Ian Grigg’s Financial Cryptography blog (FC) is one of the best sources on (alternative) payment systems. In terms of calling out risks and issues with the Bitcoin currency and market, Ian Grigg’s papers have been hit and miss. The argument in Bitcoin and Gresham’s Law has been thoroughly overtaken by the Bitcoin mining crowd, which kept up with technology advances rather than focusing solely on the potential economic issues of the emerging market. Even though technology has made the main point of the paper moot, it is still a good paper to read.
… you are committing a cardinal risk management sin. Of course, that doesn’t stop people from continuing to do qualitative risk assessments, and there’s absolutely nothing wrong with that, so long as the risks are not compared with one another. If you use qualitative risk assessment, you cannot compare assessed risks. The reason lies in the ordinal scale that is typically used: the example above is exaggerated slightly to prove a point. Whilst you would generally expect a value of 4 to be double that of 2, this doesn’t hold once you start using purely ordinal scales.
It must be just me, but every time there’s a need to present a complex topic to the executives or business leadership (a topic for another musing, methinks) I get the typical looks of “oh no, he’s going to get all lectury again”. And it’s true: I prefer to present complex topics as complex, even if the style of presentation makes them approachable. There’s no way to dumb down something that’s complex without:

- sending the message that sure, they may be leaders of the organisation, people we entrust to make the right decisions, but hey, let’s not present them anything that isn’t so simple a 5th grader could solve it, or they’ll end up in the foetal position on the floor begging to make it go away;
- quite decidedly making the whole organisation poorer for the experience and less equipped to make the right calls, because we, the experts, decided that only we should hold the knowledge;
- and actually making ourselves poorer for the experience, because when we start dumbing down complex topics, as opposed to making them approachable, we also deny ourselves the opportunity to challenge our own knowledge of the topic.