The US Government’s March 2009 paper, ‘A Tradecraft Primer: Structured Analytic Techniques for Improving Intelligence Analysis’, which forms Week 5, Day 2 of the CORTEX MBAnalytics program, nicely complements Harvard Business Review’s recent essay, ‘The Big Idea: Before You Make That Big Decision…’, by Daniel Kahneman, Dan Lovallo, and Oliver Sibony (requires registration, but well worth it). Both pieces set out to counteract cognitive biases—the primer in the context of sense making and analysis, and the HBR essay in the context of decision making. Each provides practical strategies for systemising skepticism.
Business cases for analytics often focus on applications which automate decisions by delegating them to algorithms. One of Analyst First’s consistent contentions has been that, whilst analytics certainly can automate low-level operational decisions, it makes other decisions harder. Analytics enables higher-value decisions to be made. As the Tradecraft Primer puts it:
This primer highlights structured analytic techniques—some widely used in the private sector and academia, some unique to the intelligence profession. It is not a comprehensive overview of how intelligence officers conduct analysis. Rather, the primer highlights how structured analytic techniques can help one challenge judgments, identify mental mindsets, stimulate creativity, and manage uncertainty. In short, incorporating regular use of techniques such as these can enable one to structure thinking for wrestling with difficult questions.
The techniques covered fall into three groups:
Diagnostic techniques suited for “making analytic arguments, assumptions, or intelligence gaps more transparent”:
- Key Assumptions Check
- Quality of Information Check
- Indicators or Signposts of Change
- Analysis of Competing Hypotheses (ACH)
Contrarian techniques designed to challenge status quo thinking:
- Devil’s Advocacy
- Team A/Team B
- High-Impact/Low-Probability Analysis
- “What If?” Analysis
Imaginative thinking techniques aimed at “developing new insights, different perspectives and/or develop alternative outcomes”:
- Outside-In Thinking
- Red Team Analysis
- Alternative Futures Analysis
Each of these is described in practical terms and illustrated with case studies. The final section, ‘Strategies For Using Structured Analytic Techniques’, locates them along a stylised analytic project timeline.
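The ACH technique listed above lends itself to a simple score sheet. Here is a minimal sketch of the core idea, with invented hypotheses and scores: mark each piece of evidence as consistent (+1), neutral (0), or inconsistent (-1) with each hypothesis, then rank hypotheses by how little they are disconfirmed.

```python
# A minimal sketch of Analysis of Competing Hypotheses (ACH) scoring.
# ACH focuses on disconfirmation: the hypothesis with the fewest
# inconsistencies survives, rather than the one with the most support.

def ach_rank(matrix):
    """matrix: {hypothesis: [score per evidence item]} with scores in
    {+1 consistent, 0 neutral, -1 inconsistent}.
    Returns hypotheses ordered by fewest inconsistencies first."""
    inconsistencies = {
        h: sum(1 for s in scores if s < 0)
        for h, scores in matrix.items()
    }
    return sorted(inconsistencies, key=inconsistencies.get)

# Invented example: three hypotheses scored against three evidence items.
evidence_matrix = {
    "H1: supplier delay": [+1, 0, -1],
    "H2: demand shock":   [+1, +1, 0],
    "H3: data error":     [-1, -1, +1],
}

print(ach_rank(evidence_matrix))  # least-disconfirmed hypothesis first
```

In a real ACH exercise the matrix is built collaboratively and the point is as much the argument over each cell as the final ranking.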
In ‘The Big Idea: Before You Make That Big Decision…’, Kahneman et al. address “decisions that are both important and recurring, and so justify a formal process”, in other words, strategic decisions. The typical case involves an executive making a decision on the basis of recommendations provided by a subordinate team. Overcoming cognitive biases in this context, the paper reports, has been shown to pay off in the form of better decisions. But although we may each be aware that we are prone to cognitive biases, this knowledge alone is not helpful, because as individuals we are unable to neutralise our own biases. In the organisational context, however, there is strength in numbers:
[M]ost decisions are influenced by many people, and… decision makers can turn their ability to spot biases in others’ thinking to their own advantage. We may not be able to control our own intuition, but we can apply rational thought to detect others’ faulty intuition and improve their judgment.
To do this, Kahneman et al. propose a “systematic review of the recommendation process” consisting of a twelve point checklist of questions, each designed to counteract specific cognitive biases. Executives are encouraged to ask:
- Is there any reason to suspect motivated errors, or errors driven by the self-interest of the recommending team? (Self-interested Biases)
- Have the people making the recommendation fallen in love with it? (Affect Heuristic)
- Were there dissenting opinions within the recommending team? (Groupthink)
- Could the diagnosis of the situation be overly influenced by salient analogies? (Saliency Bias)
- Have credible alternatives been considered? (Confirmation Bias)
- If you had to make this decision again in a year, what information would you want, and can you get more of it now? (Availability Bias)
- Do you know where the numbers came from? (Anchoring Bias)
- Can you see a halo effect? (Halo Effect)
- Are the people making the recommendation overly attached to past decisions? (Sunk-Cost Fallacy, Endowment Effect)
- Is the base case overly optimistic? (Overconfidence, Planning Fallacy, Optimistic Biases, Competitor Neglect)
- Is the worst case bad enough? (Disaster Neglect)
- Is the recommending team overly cautious? (Loss Aversion)
The thinking behind each of these is elaborated and case study examples are provided.
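As an illustration only, the checklist lends itself to being encoded as a simple review aid. The question wording below is abridged and the flagging logic is my own, not from the essay:

```python
# A hedged sketch of the Kahneman et al. checklist as a review aid:
# each question maps to the bias it targets, and a review records
# whether the concern applies. Wording abridged; subset of questions.

CHECKLIST = {
    "Any reason to suspect motivated errors?": "Self-interested Biases",
    "Has the team fallen in love with its proposal?": "Affect Heuristic",
    "Is the diagnosis driven by salient analogies?": "Saliency Bias",
    "Have credible alternatives NOT been considered?": "Confirmation Bias",
    "Is the base case overly optimistic?": "Overconfidence",
    "Is the worst case not bad enough?": "Disaster Neglect",
}

def flagged_biases(answers):
    """answers: {question: True if the concern applies}.
    Returns the biases an executive should probe further."""
    return [bias for q, bias in CHECKLIST.items() if answers.get(q)]

review = {
    "Has the team fallen in love with its proposal?": False,
    "Is the base case overly optimistic?": True,
}
print(flagged_biases(review))  # ['Overconfidence']
```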
Both papers are recommended in full. Analysts and decision makers may be accustomed to being data driven, but being rigorously and systematically skeptical is a broader discipline—welcoming a diversity of opinions, actively seeking and valuing disconfirmation, and being prepared to challenge accepted organisational wisdom.
Week 1, Day 5 of the CORTEX MBAnalytics program includes Tom Davenport’s ‘Rethinking Knowledge Work: A Strategic Approach’ from the McKinsey Quarterly of January 2011. In the essay, Davenport argues that productivity software hasn’t boosted the productivity of “knowledge workers” to the extent hoped for given the outlays of the last two decades. The primary method employed over this period has been what he calls ‘free-access’: providing knowledge workers with tools and information and leaving it to them to work out what to do with them:
In this model, knowledge workers define and integrate their own information environments. The free-access approach has been particularly common among autonomous knowledge workers with high expertise: attorneys, investment bankers, marketers, product designers, professors, scientists, and senior executives, for example. Their work activities are seen as too variable or even idiosyncratic to be modeled or structured with a defined process.
This approach suits situations of uncertainty, ambiguity, and contingency, each of which works against predictability. The upside is the human ability to adapt to these conditions. The downside is that autonomy doesn’t come for free: workers execute variably, some poorly, and the lack of standardisation leads to duplication and other inefficiencies. Precise performance measurement and management is also a challenge. Typical productivity metrics in the free-access domain are rough and high level, if present at all, and there is a trade-off between additional measurement and ease of information access.
The alternative model Davenport terms ‘structured-provisioning’, in which tasks and deliverables are defined and knowledge workers slotted in. Typical examples are workflow or ‘case management’ systems, which integrate decision automation, content management, document management, business process management, and collaboration technologies:
Case management can create value whenever some degree of structure or process can be imposed upon information-intensive work. Until recently, structured-provision approaches have been applied mostly to lower-level information tasks that are repetitive, predictable, and thus easier to automate.
The upside is efficiency. The downsides are worker alienation and resistance, and detrimental business outcomes resulting from complexity and poor specification—bad mortgages, for example.
Davenport believes that businesses should increasingly “structure previously unstructured processes”; that is, the free-access domain should progressively be structure-provisioned. He uses a 2 x 2 matrix to frame his argument. On the x-axis is ‘Complexity of work’, ranging from Routine across to Interpretation/judgement. On the y-axis is ‘Level of interdependence’, ranging from Individual actors up to Collaborative groups. The resulting knowledge work quadrants are:
- Transaction model (Routine x Individual actors)
- Expert model (Interpretation/judgement x Individual actors)
- Integration model (Routine x Collaborative groups)
- Collaboration model (Interpretation/judgement x Collaborative groups)
The Transaction model contains most existing structure-provisioning, and the Collaboration model—consisting of “Improvisational work”, being “Highly reliant on deep expertise across multiple functions”, and “Dependent on fluid deployment of flexible teams”—is inherently free-access. Davenport sees the Expert and Integration models, however, as open to further structured-provisioning.
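Davenport’s matrix amounts to a two-key lookup. A minimal sketch, where the string labels follow the quadrant list above but the function itself is purely illustrative:

```python
# A minimal sketch of Davenport's 2x2 knowledge-work matrix as a lookup.
def knowledge_work_model(complexity, interdependence):
    """complexity: 'routine' or 'judgement';
    interdependence: 'individual' or 'collaborative'."""
    quadrants = {
        ("routine",   "individual"):    "Transaction model",
        ("judgement", "individual"):    "Expert model",
        ("routine",   "collaborative"): "Integration model",
        ("judgement", "collaborative"): "Collaboration model",
    }
    return quadrants[(complexity, interdependence)]

print(knowledge_work_model("judgement", "individual"))  # Expert model
```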
Martin Ford’s book, The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future (free as a PDF download), further illuminates these trends. Ford identifies three categories of job vulnerable to displacement by technology:
- Hardware jobs, such as assembly line jobs, which become displaced by robotics—a process which is already well underway.
- Software jobs, such as radiology, which are first displaced by outsourcing, then by AI.
- Interface jobs, such as loan officers, which become displaced by telecommunications, digitisation, and data standardisation.
‘Rethinking Knowledge Work’ is an interesting change of direction for Davenport. His seminal ‘Competing on Analytics’ essay, and the book that followed, profiled business effectiveness and adaptiveness powered by analytics. The arguments here, by contrast, are all about efficiencies.
[T]o date, high-end knowledge workers have largely remained free to use only the technology they personally find useful. It’s time to think about how to make them more productive by imposing a bit more structure. This combination of technology and structure, along with a bit of managerial discretion in applying them to knowledge work, may well produce a revolution in the jobs that cost and matter the most to contemporary organizations.
Given the vulnerability of so much knowledge work to displacement, it’s a good time to be an analyst. Business Analytics clearly lives in the “Expert model” quadrant. Further to that, Davenport sees it as playing a role in augmenting other expertise within that domain:
Expert jobs may also benefit from “guided” data-mining and decision analysis applications for work involving quantitative data: software leads the expert through the analysis and interpretation of data.
This further validates Analyst First principles, namely our insistence on the importance of human over electronic infrastructure, our conception of Business Analytics as an intelligence rather than IT function, and our focus on strategic in preference to operational analytics.
An explorer returning from new territory may be able to describe the path he took. He may also have drawn a map. A path is more prescriptive than a map. A map is more descriptive than a path—not in terms of detail, necessarily, but in terms of the flexibility it affords. A path presumes a more specific purpose than a map. A map provides more context than a path. To have a map when following a path is to take a more contextually rich journey, and to have options. But options are taxing and scenery distracting when the goal is simply the most direct route.
A data analyst exploring new data, or newly exploring old data, can create similar objects. He may create a path, enabling others to take the same journey, or saving them from needing to. He may also create a new map, or enlarge, extend, enrich or correct an existing map. In doing so he may uncover new paths, or alternative paths, or territory through which no paths yet exist. Sensemaking and decision support are map-making activities. Decision automation is a path-taking activity. Simulation builds a richer map. Optimisation uses it to find the best path.
The New Yorker recently ran a fascinating profile of Ray Dalio, the founder of Bridgewater Associates, the world’s richest hedge fund. From an Analyst First point of view the piece offers a window into the human infrastructure of an arms race environment. Bridgewater is a culture committed to making and learning from its mistakes:
“Our greatest power is that we know that we don’t know and we are open to being wrong and learning.”
In his Principles, Dalio declares that acknowledging errors, studying them, and learning from them is the key to success. He writes, “Pain + Reflection = Progress.” Bridgewater puts this equation into action by organizing lengthy assessment sessions, in which employees must discuss their mistakes.
“What we’re trying to have is a place where there are no ego barriers, no emotional reactions to mistakes. . . . If we could eliminate all those reactions, we’d learn so much faster.”
Part of Bridgewater’s human infrastructure is a commitment to radical transparency. Some of its key items of electronic infrastructure are therefore video and tape recorders:
Like virtually all meetings at Bridgewater, this one was taped. Dalio says that the tapes—some audio, some video—provide an objective record of what has been said; they can be used for training purposes, and they allow Bridgewater’s employees to keep up with what is going on at the firm, including his discussions with senior colleagues. “They get to see all of my mistakes,” Dalio told me.
One rule of radical transparency is that Bridgewater employees refrain from saying behind a person’s back anything that they wouldn’t say to his face.
This means that management’s misgivings about a particular employee’s suitability for promotion are discussed openly with him, and recorded. (He doesn’t get the promotion.)
James Comey, the firm’s top lawyer… [took] a while to get used to dealing with Dalio. “When Ray sent me an e-mail saying, ‘I think what you said today doesn’t make sense,’ I tended to think, What does he really mean? Where’s he coming from? And what is my play? Who are my allies? All of the things you think about in the outside world. It took me three months to realize that when Ray says, ‘I think you are wrong,’ he really means ‘I think you are wrong.’ He’s not trying to provoke you, or anything else.”
“What is a typical organization?” [Dalio] asked me one day. “A typical organization is one where people are walking around saying, ‘This is stupid, this doesn’t make sense,’ behind each other’s backs.”
The article is also illuminating in its discussion of Bridgewater’s analysis and trading philosophies, which reflect its acceptance of uncertainty:
[T]he Pure Alpha fund typically has in place about thirty or forty different trades. “I’m always trying to figure out my probability of knowing,” Dalio said. “Given that I’m never sure, I don’t want to have any concentrated bets.” Such thinking runs counter to the conventional wisdom in the hedge-fund industry, which is that the only way to score big is to bet the house.
Many economists start at the top and work down. They look at aggregate statistics—inflation, unemployment, the money supply—and figure out what the numbers mean for particular industries, such as autos or tech. Dalio does things the other way around. In any market that interests him, he identifies the buyers and sellers, estimates how much they are likely to demand and supply, and then looks at whether his findings are already reflected in the market price. If not, there may be money to be made.
Bridgewater is more a qualitative than a quantitative trading fund. In this context, its decision support systems are interesting:
To guide its investments, Bridgewater has put together hundreds of “decision rules.” These are the financial analogue of Dalio’s Principles. He used to write them down and keep them in a ring binder. Today, they are encoded in Bridgewater’s computers. Some of these indicators are very general. One of them says that if inflation-adjusted interest rates decline in a given country, its currency is likely to decline. Others are more specific. One says that, over the long run, the price of gold approximates the total amount of money in circulation divided by the size of the gold stock. If the market price of gold moves a long way from this level, it may indicate a buying or selling opportunity.
In any given market, Bridgewater may have a dozen or more different indicators. However, even when most or all of the indicators are pointing in a certain direction, Dalio doesn’t rely solely on software. Unless he and Jensen and Prince agree that a certain trade makes sense, the firm doesn’t make it. While this inevitably introduces an element of human judgment to the investment process, Dalio insists it is still driven by the rules-based framework he has built up over thirty years. “When I’m thinking, ‘What is going on today?,’ I also need to make the connection to ‘How does what is happening today fit into our framework for making this decision?’ ’’ he said. Ultimately, he says, it is the commitment to systematic analysis and systematic investment that distinguishes Bridgewater from other hedge funds.
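The gold indicator quoted above can be sketched as one of these decision rules. The deviation band and the figures below are hypothetical, not Bridgewater’s:

```python
# A hedged sketch of the gold decision rule described in the article:
# fair value ~ money in circulation / gold stock, with a deviation
# band triggering a signal. Threshold and figures are invented.

def gold_signal(market_price, money_in_circulation, gold_stock, band=0.15):
    """Return 'buy', 'sell', or 'hold' based on deviation from fair value."""
    fair_value = money_in_circulation / gold_stock
    deviation = (market_price - fair_value) / fair_value
    if deviation > band:
        return "sell"   # gold looks expensive relative to the money stock
    if deviation < -band:
        return "buy"    # gold looks cheap relative to the money stock
    return "hold"

# Hypothetical figures: $2.0tn in circulation, 1.6bn troy oz of gold,
# giving a fair value of $1,250/oz.
print(gold_signal(1500.0, 2.0e12, 1.6e9))  # 20% above fair value: 'sell'
```

As the article stresses, at Bridgewater a dozen such indicators per market only inform a trade; the firm still requires agreement among its principals before acting.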
In other words, Bridgewater runs on human judgement augmented by decision support, not decision automation. It recognises that decision support leads to higher value decisions but in practice makes decision making harder, not easier. As a recent Analyst First post argued, new information is not always “actionable”:
Comey was initially struck by how long it took Bridgewater to make decisions, because of the ceaseless internal debates. “I said, ‘Lordy, we have to put tops on bottoms. Let’s get something done,’ ” Comey recalled. But he added, laughing, “The mind control is working. I’ve come to believe that all the probing actually reduces inefficiencies over the long run, because it prevents bad decisions from being made.”
Dalio on ownership:
“I don’t want Bridgewater to go public or have it controlled by anybody outside the firm,” he said. “I think people who do that tend to mess up the firm.”
On the nature of competition in financial markets:
[Dalio] regards it as self-evident that all social systems obey nature’s laws, and that individual participants get rewarded or punished according to how far they operate in harmony with those laws. He views the financial markets as simply another social system, which determines payoffs and punishments in a like manner. “You have to be accurate,” he says. “Otherwise, you are going to pay. Alpha is zero sum. In order to earn more than the market return, you have to take money from somebody else.
And finally, on the global economy:
Dalio believes that some heavily indebted countries, including the United States, will eventually opt for printing money as a way to deal with their debts, which will lead to a collapse in their currency and in their bond markets. “There hasn’t been a case in history where they haven’t eventually printed money and devalued their currency,” he said. Other developed countries, particularly those tied to the euro and thus to the European Central Bank, don’t have the option of printing money and are destined to undergo “classic depressions”.
Tim Van Gelder of Austhink has reposted his classic article on the shortcomings of an IT based model of BI. He identifies a vital, irreplaceably human element missing from the model. The cartoon is pretty good too.
Some gems of clear thinking:
there’s something missing from this picture. In non-trivial or non-routine cases, you can’t (or shouldn’t) skip directly from insight to action. Insight, in TE’s description of it, appears to be a richer, more synthesized, more accessible form of information; it is what you’ve got when you’ve used their tools to “look around and drill into the data and report it out.” Between insight, in this sense, and action there have to be processes of assessing, deliberating, integrating, weighing, and choosing – in short, there has to be decision.
Decision making is the crucial bridge between information (even quality information, i.e. insight) and action.
And the money shot:
Before any action, you’d have to decide which action was most appropriate in the circumstances. The insight we obtained (and no doubt numerous others we could get from our wonderful BI suite) would surely help. But insights, no matter how penetrating or how numerous, don’t dictate any particular decision. The decision is generally made through a deliberative, usually collaborative process in which insights are translated into arguments and arguments are assessed and weighed.
In my experience working for software vendors, the answer to this has always been ‘yes and no’, but the one sure thing is that everyone uses Excel. Spreadsheets are the most pervasive and effective decision support tools; every organisation uses them, and it’s a safe bet that this will always be the case. No amount of data warehousing will ever provide decision makers with all the information they need. To the extent that it can, those decisions can be automated. Decisions invariably require new data, and that new data will be unanticipatable, or tacit, or both. Spreadsheets are unbeatable for ad hoc data analysis and for turning tacit data into explicit data.
Evelson poses his questions in the context of (presumably Forrester) research into BI Pricing, which he says is:
[S]howing a broad range of transparency (or non transparency) from BI vendors themselves. Some vendors welcomed our research RFI and are happily providing all the info we requested. Some are less transparent and are insisting that we only publish price ranges or comparative analysis (who’s more/less expensive) without showing their exact quotes. Yet, some others have declined to participate.
That doesn’t surprise me. Wide price ranges are both inevitable and understandable. Software businesses, particularly in growth markets like BI, concentrate more on increasing revenue than on managing to the bottom line. Costs just don’t matter as much. They’re also indirect – software being an information product. Part of the software sales process is working out what the prospect is willing to pay, which is basically what they’ll end up paying, which will vary from customer to customer.
Tried and true best practices for enterprise software development and support just don’t work for business intelligence (BI). Earlier-generation BI support centers — organized along the same lines as support centers for all other enterprise software — fall short when it comes to taking BI’s peculiarities into account. These unique BI requirements include less reliance on the traditional software development life cycle (SDLC) and project planning and more emphasis on reacting to the constant change of business requirements.
That is from Forrester Research’s Agile Business Intelligence Solution Centers Are More Than Just Competency Centers report, just released. The full version of the report is paid (USD 499) but a free overview from its two lead authors, Boris Evelson and Rob Karel, is here. The case against the SDLC / project approach is summarised thus:
Earlier-generation BI support organizations are less than effective because they often:
- Put IT in charge
- Remain IT-centric
- Continue to be mostly project-based
- Focus too much on functional reporting capabilities but ignore the data
In response, Forrester advocates a ‘flexible and agile’ approach to BI, and the establishment of “BI on BI” to explicitly learn from successes and failures.
This echoes much of what Analyst First advocates, namely that:
- Analytics is not IT
- IT risk management practices hamper Business Analytics initiatives
- It is prudent to assume bad data
- A Lean Startup approach makes more sense
- A good deal of Business Analytics is bespoke
Note that Forrester is making these recommendations at the Business Intelligence end of the Business Analytics spectrum. It’s arguing that, even where Business Analytics lives in an operational, repeatable, systematised, decision-automation context:
[No] repository can fully substitute for personal, qualitative knowledge; that’s often more art than science. Therefore, staff the BICC/COE [Business Intelligence Competency Centre / Centre Of Excellence] with individuals whose primary responsibility is to disseminate such knowledge above and beyond what’s available in the repository.
In other words, the human infrastructure is critical, and investments in electronic infrastructure which ignore it will be unsuccessful.
Broadly speaking all Business Analytics serves one of two goals: decision support or decision automation. One way to idealise these is as either reports (decision support) or algorithms (decision automation).
Algorithms reduce the need for humans to think. Picture the in-database credit scoring function embedded deep in your bank’s systems and firing thousands of times an hour. This kind of decision automation (or decision replacement) is a common operational analytics endpoint.
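A hypothetical sketch of such a scoring function, with invented weights and cutoff, makes the contrast concrete: the rule decides by itself, with no human in the loop.

```python
# A minimal, hypothetical sketch of decision automation: a scorecard
# rule that approves or declines without human involvement. The
# weights, thresholds, and cutoff are invented for illustration.

def credit_decision(income, debt, years_at_address):
    """Return 'approve' or 'decline' from a toy additive scorecard."""
    score = 0
    score += 30 if income > 50_000 else 10          # income band
    score -= 20 if debt / max(income, 1) > 0.4 else 0  # debt-to-income penalty
    score += min(years_at_address, 5) * 4           # stability, capped
    return "approve" if score >= 40 else "decline"

print(credit_decision(income=80_000, debt=10_000, years_at_address=3))
```

Embedded in-database, a function like this can fire thousands of times an hour with no analyst anywhere in sight.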
Reports, on the other hand, make decisions more difficult. The simplest decision support system is a coin toss, but a business relying only on heads and tails will not survive for long. Real decision support adds ambiguity, complexity, uncertainty, and necessitates human judgement. This makes decisions harder, not easier.
“Business Analytics” is the currently favoured term for the conjunction of Analytics and Business Intelligence. How are these two different?
One way to delineate between them is in terms of applications and methods. Business Intelligence is typically taken to cover, in no particular order:
- Query and Reporting
- Data Warehousing
- Planning and Budgeting
A similar list for Analytics would include:
- Data Mining / Machine Learning
- Predictive Modelling
- Clustering and Segmentation
- Unsupervised Learning
- Data Visualisation
- Time Series Analysis
- Social Network Analysis
- Principal Components Analysis
- Association Rules
- Factor Analysis
- Simulation and Optimisation
- Experimental Design
- Choice Modelling
- Data Envelopment Analysis
An alternative means of contrasting Business Intelligence and Analytics is by way of functional distinctions:
| Business Intelligence | Analytics |
|---|---|
| Simpler problems | Harder problems |
| Provides context | Provides answers |
| Technology used to address scale | Technology used to address complexity |
| Hard-to-access data transformed into information | Hard-to-process data transformed into information |
| Computation used to restructure data | Computation used to derive data |
| Value defined primarily in terms of the timeliness of results | Value defined primarily in terms of the specificity of results |
| Deployment entails publication to a mass audience | Deployment entails either decision support or decision automation |
| Involves technical challenges | Involves conceptual challenges |