Risk Intelligence


Elliott Sidewater
06-21-2011, 04:18 PM
I work in the Defense industry and ran across this extremely interesting article about "Risk Intelligence". It identifies and presents the personality traits associated with higher risk intelligence. I think there's quite a bit for us to think about here, so without further ado, here it is (the article appears in the June 2011 issue of C4ISR Journal):

Which one are you?
The intelligence community needs fewer ‘hedgehogs’ and more ‘foxes’ — analysts and decision-makers with a knack for making accurate predictions. British behavioral scientist Dylan Evans explains.
June 01, 2011
In February, the New York Times reported that President Barack Obama had criticized American spy agencies for failing to predict the spreading unrest in the Middle East. This was nothing new; intelligence officials have long had to endure the wrath of American presidents, who often blame them for misjudging the events of the day. Nevertheless, Obama’s comments do raise an interesting question: How can intelligence analysts make better predictions?
It’s a topical question; just weeks after the Times reported Obama’s criticisms, researchers at MITRE, a federally funded organization that conducts research and development for the Defense and Homeland Security departments, began recruiting volunteers for a multiyear, Web-based study of people’s ability to predict world events. Sponsored by the Intelligence Advanced Research Projects Activity, the Forecasting World Events Project aims to discover whether some kinds of personality are better than others at making accurate predictions. Project organizers are recruiting a diverse panel of participants to offer predictions about events and trends in international relations, social and cultural change, business and economics, public health, and science and technology.
Previous research was not encouraging. A famous study by the American psychologist Philip Tetlock asked 284 people who made their living “commenting or offering advice on political and economic trends” to estimate the probability of future events in both their areas of specialization and in areas about which they claimed no expertise. Over the course of 20 years, Tetlock asked them to make a total of 82,361 forecasts. Would there be a nonviolent end to apartheid in South Africa? Would Gorbachev be ousted in a coup? Would the United States go to war in the Persian Gulf? And so on.
Tetlock put most of the forecasting questions into a “three possible futures” form, in which three alternative outcomes were presented: the persistence of the status quo, more of something (political freedom, economic growth), or less of something (repression, recession). The results were embarrassing. The experts performed worse than they would have if they had simply assigned an equal probability to all three outcomes. Dart-throwing monkeys would have done better.
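To make that comparison concrete, here is a minimal Python sketch (the forecasts and outcomes below are invented for illustration, not Tetlock's data) of how three-outcome probability forecasts can be scored against the equal-probability baseline using the Brier score, where lower is better:

# Brier score for one three-outcome forecast: sum over outcomes of
# (forecast probability - outcome indicator)^2.  Lower is better.
def brier(forecast, outcome_index):
    return sum((p - (1.0 if i == outcome_index else 0.0)) ** 2
               for i, p in enumerate(forecast))

# Hypothetical forecasts over the three possible futures:
# [status quo, more of something, less of something].
cases = [
    ([0.70, 0.20, 0.10], 2),   # confident expert, wrong
    ([0.10, 0.80, 0.10], 1),   # confident expert, right
    ([0.60, 0.30, 0.10], 2),   # confident expert, wrong
]
uniform = [1 / 3, 1 / 3, 1 / 3]   # the dart-throwing baseline

expert_avg = sum(brier(f, o) for f, o in cases) / len(cases)
uniform_avg = sum(brier(uniform, o) for _, o in cases) / len(cases)
print(f"expert: {expert_avg:.3f}   uniform baseline: {uniform_avg:.3f}")

On these made-up numbers, the confident but often-wrong expert averages about 0.89 against the baseline's constant 0.67 — the shape of the result Tetlock reported.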
Furthermore, the pundits were not significantly better at forecasting events in their area of expertise than at assessing the likelihood of events outside their field of study. “We reach the point of diminishing marginal predictive returns for knowledge disconcertingly quickly,” Tetlock observed. “In this age of academic hyperspecialization, there is no reason for supposing that contributors to top journals — distinguished political scientists, area study specialists, economists, and so on — are any better than journalists or attentive readers of the New York Times in ‘reading’ emerging situations.” And the more famous the forecasters, the lower their predictive acumen seemed to be. “Experts in demand,” Tetlock noted, “were more overconfident than their colleagues who eked out existences far from the limelight.”
Yet all is not lost. Not all experts are equally bad. Some, in fact, are surprisingly good, and their uncanny accuracy suggests that there may be a special kind of intelligence for thinking about risk and uncertainty which, given the right conditions, can be improved. I call it “risk intelligence.”


Measuring risk intelligence
Few psychologists have noticed risk intelligence because it lurks in a ragtag bunch of people they rarely bother studying, such as horse handicappers and U.S. weather forecasters.
Studies have shown that U.S. weather forecasters, in particular, have high levels of risk intelligence. Understanding why they are so good may offer clues as to how risk intelligence can be improved in others. Sarah Lichtenstein, a leading scholar in the field of judgment and decision-making, speculates that several factors favor the weather forecasters. First, they have been expressing their forecasts in terms of numerical probability estimates for many years and, as a result, are better at it. British forecasters, by contrast, use words, not numbers.
Second, the task for weather forecasters is repetitive. The same set of questions (“Will it rain?” “Will it freeze?” etc.) has to be answered over and over again.
And finally, the feedback for weather forecasters is well-defined and promptly received.
These three factors were the cornerstones of an innovative training program introduced by senior executives at Royal Dutch/Shell in the 1970s. The executives noticed that newly hired geologists were far too confident when estimating the chances of finding oil. The geologists might estimate the likelihood of an oil strike in a given region at 40 percent, but when 10 wells were actually drilled there, only one or two would produce. This overconfidence cost Shell millions of dry-well dollars.
These judgment flaws puzzled the senior executives, since the geologists had excellent qualifications. The problem lay not with their primary knowledge but with what psychologists call “metacognition.” Experts often think they know more than they really do.
Shell tackled the problem by implementing an original training program. The geologists were given details of previous explorations and asked to provide numerical estimates of the chances of finding oil in each case. Then they were given the actual results. The training worked. By the end of the program, the geologists had much higher risk intelligence. Now, when they estimated that there was a 40 percent chance of finding the black stuff in a given region, four out of 10 wells drilled would strike oil.
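A rough Python sketch of the feedback loop described here (the estimate/outcome pairs are invented; Shell's actual cases are not public in this detail): bin past probability estimates, then compare each bin's stated probability with the strike rate actually observed. Matching numbers mean good calibration.

from collections import defaultdict

# Hypothetical (estimate, struck_oil) pairs from past explorations.
history = [
    (0.4, False), (0.4, True), (0.4, False), (0.4, False), (0.4, True),
    (0.4, True), (0.4, False), (0.4, True), (0.4, False), (0.4, False),
    (0.2, False), (0.2, True), (0.2, False), (0.2, False), (0.2, False),
]

# Group estimates into bins; a well-calibrated estimator's stated
# probability matches the observed frequency in each bin.
bins = defaultdict(list)
for estimate, struck in history:
    bins[round(estimate, 1)].append(struck)

for estimate in sorted(bins):
    outcomes = bins[estimate]
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"said {estimate:.0%} -> observed {hit_rate:.0%} ({len(outcomes)} wells)")

In this toy history, the 40 percent bin really does strike oil four times in 10 — the post-training behavior the article describes.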
Foxes versus hedgehogs
What if something similar were to be introduced in the intelligence agencies and the armed forces? Here’s how it would work: When forecasting world events and emerging security threats, intelligence analysts would be required to provide numerical probability estimates. Or, to take another example, commanders could be required to estimate the probability of destroying various targets or achieving other specified objectives when planning tactical operations. Then, as the situation developed, the accuracy of those estimates could be quantified by means of calibration tests and the results fed back to the analysts and commanders.
Risk intelligence testing could also be used when recruiting and selecting personnel in the first place. The armed forces make extensive use of aptitude testing, and it would be easy to incorporate a simple test of risk intelligence. In the absence of direct measures, existing data on personality could be used as a proxy, since high risk intelligence is favored by some personality traits and hindered by others. For example, one 2004 study found an association between poor risk intelligence and narcissism, while another found that extroverts also tend to have lower than average risk intelligence. More recently, two researchers at the Institut Européen d’Administration des Affaires, an international graduate business school with campuses in France, Abu Dhabi and Singapore, investigated whether risk intelligence was linked with Machiavellianism.
According to the psychologists who developed the first formal scale to measure this trait, people who score highly on tests of Machiavellianism “manipulate more, win more, are persuaded less, [and] persuade others more.” In 2010, Kriti Jain and Neil Bearden asked several hundred people to estimate the probability of each team in that year’s FIFA World Cup making it to the quarter-finals, semifinals and finals, and of winning the tournament. They also measured the participants’ Machiavellianism by means of a test that asked them to rate how strongly they agreed with statements such as “I believe that lying is necessary to maintain a competitive advantage over others,” and “If I show any weakness at work, people will take advantage of it.” Those who scored higher on this test tended to make poorer predictions, which suggests that Machs (as those with high levels of Machiavellianism are known in the literature) tend to have lower than average risk intelligence.
This research suggests that if intelligence agencies and the armed forces test different personality traits of new applicants, they might be able to recruit analysts with better risk intelligence. Tetlock’s study suggests that risk intelligence is not strongly related to education or experience, but is marked by a particular style of thinking. Those with high risk intelligence tend to have fewer preconceptions, and draw information and ideas from a wider range of sources. They are also better able to admit they are wrong when they make a mistake and more likely to perceive the world as complex and uncertain.
Borrowing a metaphor from philosopher Isaiah Berlin, Tetlock calls those with this style of thinking “foxes” and contrasts them with the risk-stupid “hedgehogs.”
Hedgehogs “know one big thing,” and they are constantly applying that knowledge to new domains, while displaying “bristly impatience with those who ‘do not get it,’” Tetlock writes. Foxes know lots of small things, and they see prediction as the “stitching together” of diverse sources of information.
I am currently investigating the possibility of correlations between risk intelligence and other psychological constructs such as the need for closure. The need for closure reflects the desire for an answer to a question. When it becomes overwhelming, any answer, even a wrong one, is preferable to remaining in a state of confusion and ambiguity. Pulling in the other direction is “the need to avoid closure.” When this force becomes overwhelming, no answer is found satisfactory, and the person remains so open-minded as to be unable to form any opinion at all.
People differ in the extent to which they are swayed by these opposing forces, and these differences can be measured by means of a test that asks how strongly you agree or disagree with statements such as these:
• I feel irritable when one person disagrees with what everyone else in a group believes.
• I hate to change my plans at the last minute.
• It’s annoying to listen to someone who cannot seem to make up his or her mind.
• I’d rather know bad news than stay in a state of uncertainty.
Agreement with these statements indicates a greater need for closure, whereas agreement with the following statements suggests a greater need to avoid closure:
• Even after I’ve made up my mind about something, I am always eager to consider a different opinion.
• I like to have friends who are unpredictable.
• When I go shopping, I have difficulty deciding exactly what it is that I want.
• My personal space is usually messy and disorganized.
My hunch is that, in a person with high risk intelligence, these two opposing forces are so evenly matched as to cancel each other out, leaving the work of judgment to proceed entirely on the basis of rational calculation. In most people, however, one force will typically be stronger than the other, and as a result their probability estimates will be systematically biased in one direction or another.
Even if the suggestions proposed here were taken up and intelligence agencies started testing new recruits for risk intelligence and correlated personality traits, it wouldn’t stop politicians passing the buck and blaming spies for what are really political mistakes. The best illustration of this phenomenon remains the intelligence failures surrounding the decision to invade Iraq.
In hindsight, it seems that there were only a few sources that told Western intelligence operatives that Saddam Hussein had weapons of mass destruction, but they were particularly vociferous. The most convincing, it seems, was an Iraqi defector code-named Curveball by the German and American intelligence officers who dealt with him. In a series of meetings during 2000, Curveball told a German agent identified as “Dr. Paul” that Saddam Hussein possessed mobile biological weapons labs.
A decade later, Curveball confessed that his tales of WMD were lies. His real name was Rafid Alwan al-Janabi, according to an investigation by the staff of the television news show “60 Minutes.” One reason why then-Secretary of State Colin Powell came to rely so heavily on Janabi’s tales, despite the growing doubts about Janabi’s credibility among intelligence officers, may be the fact that indications of uncertainty and other caveats tend to be lost as rumors travel along the chain of whispers. Experiments have shown that when people pass on information, they tend to convey the gist without indicating how strongly they believe it. As a result, a piece of gossip hedged with expressions of disbelief morphs into a hard fact as it is passed on from person to person. The initial degree of uncertainty is “lost in transmission.”
Improving the risk intelligence of intelligence analysts may be a solvable problem. Ensuring that the politicians remain aware of the uncertainties may not.
Dylan Evans is a lecturer in behavioral science at the School of Medicine at University College Cork and founder of the small company Projection Point in Cork, Ireland. His company seeks to improve the ability of corporate decision makers to predict outcomes.

Edward DeVere
06-21-2011, 06:01 PM
Interesting article. Thanks for posting it.

Mike A
06-21-2011, 07:52 PM
Very interesting article, and a topic very relevant, I would think, to many horse race bettors who aspire to become successful.

Here are a few thoughts:

It is the nature of beatable gambling games (and it is a fact that horse race betting is one of them) that they allow for the establishment of a hierarchical framework of values and priorities in which real risk can be made virtually non-existent. Indeed, it is a commonplace among successful bettors that this is practically the essence of professionalism.

Whatever inborn and/or developable general aptitudes one could speculate about with regard to successful vs. unsuccessful bettors (and I will address this a little more later), there is a learnable skillset specific to horse race handicapping and betting. And to be absolutely clear, the skillset I'm referring to, for the sake of THIS part of the post, is only the one involving the particulars of the game: not emotional maturity and other personal traits nonspecific to horse race betting.

The most important thing is to attain this skillset. (How rarefied a total skillset it may take to achieve high levels of success is beside the point here. And "skillset" is not meant at all to imply that there is one rigidly formulaic approach to success; hence "hierarchical framework," which respects the particular nature of the game, as I wrote above.)

This skillset involves handicapping skill and money management/betting skill. (And again, which of these is really more important IS beside the point here.)

Whether there tends to be a learned and/or inborn predisposition toward high "risk intelligence" in successful horse players, I'm not sure. I'd say it's somewhat likely, though mostly because I would think it would function as an attribute that makes one more likely than the average person to get involved in the game to begin with, since the "HRI" predisposition would attract them to a game which almost all beginners (wrongly) perceive as inherently "risky".

But I don't think it necessarily plays much of a role in keeping a player successful, because given a sufficiently developed level of skill in handicapping, risk mostly enters in when one is flirting with risk of ruin: through typical bet size, fluctuating bet size in relation to bankroll, and/or a bankroll that isn't "disposable".
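To put a toy number on that risk-of-ruin point, here is a small Monte Carlo sketch in Python (the win rate, odds, and bet sizes are made-up figures for illustration only, not anyone's actual play): it estimates how often a bettor with a genuine edge still goes broke at different flat bet sizes relative to a starting bankroll.

import random

def ruin_probability(bankroll=100.0, bet_fraction=0.05, win_prob=0.25,
                     net_odds=3.5, n_bets=1000, trials=2000):
    """Estimate the chance of going broke making flat bets sized as a
    fraction of the starting bankroll, despite a positive edge."""
    # Expected value per $1 staked: 0.25 * 3.5 - 0.75 = +0.125, a real edge.
    bet = bankroll * bet_fraction
    ruined = 0
    for _ in range(trials):
        money = bankroll
        for _ in range(n_bets):
            if money < bet:          # cannot cover the next bet: ruin
                ruined += 1
                break
            money += bet * net_odds if random.random() < win_prob else -bet
    return ruined / trials

for frac in (0.02, 0.05, 0.10, 0.20):
    print(f"bet {frac:.0%} of bankroll -> "
          f"ruin in ~{ruin_probability(bet_fraction=frac):.1%} of trials")

The edge is identical in every row; only the stake relative to bankroll changes, which is exactly where the risk actually enters.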

It could still very well be true that a more or less ingrained low risk intelligence could lead a given person to perform less well, even within reasonable bet size/bankroll parameters, than they would with a higher level of aptitude in that area.

Human beings are complicated, and each is unique, although we all have the same fundamental nature. So although I am circumspect when it comes to personality typologies which emphasize inborn aptitude, it's not because I don't think inborn aptitudes exist. Here's the crux, from a practical standpoint:

With regard to each individual person wondering about his inborn capacities, I think the way to go is to maintain a healthy agnosticism, recognizing that his own subjectivity can prove to be a self-fulfilling prophecy with regard to his perceived limitations. Therefore each of us should work on developing ourselves...our character...our discipline, our wisdom, our virtues.

On Spec
06-22-2011, 01:02 AM
Terrific post and ideas. Thanks for posting this -- it seems to validate my own approach to this game.

Correct me if I missed the point of the essay, but this risk intelligence seems to boil down to the trait of "suspension of judgment." Meaning -- If you're going to make good predictions, don't be too quick to decide something, particularly if you don't have an overwhelming sense that there is no alternative.

Handicapping takes so darn long to get good at (and I'll set as the benchmark simply beating the take, not necessarily making a profit) that you have to keep open the idea (suspend judgment) of the game being beatable for an enormous period of time when your own betting results are telling you the opposite.

My guess is that the top players have all gone through this long process of just believing they could beat this thing, even when they weren't yet beating it. But they stayed open to using new information, and used the continual feedback of the betting window, until they came upon the set of insights that led to consistently successful handicapping.

I think it comes down to the game being more fascinating than the money. If all you want is the money, there are lots of ways to accomplish that, and lots of encouragement to stop betting horse races. But if what you want is to be consistently successful at the races, there is no other alternative, and nowhere else to turn. Money is just how we keep score.

Could it be that if all you want is the money, the mystery of the game will never reveal itself?

But if you find the game fascinating, what other choice do you have?

</delusions of Yoda-hood>