“Appropriate Empowerment” is the third and final element of the Holy Trinity, the three essential characteristics of sponsors of successful analytics practices, covered in the current series of posts. Appropriate Understanding and Appropriate Incentive were covered previously.
As before, this is an examination of the success and failure modes of the element in question. What does Appropriate Empowerment (or just “Empowerment” for short) look like when it succeeds, and what happens when it fails, or when other elements fail to support it? The success mode of Empowerment considers the situation where all elements of the Trinity are in place, but focuses on the role played by Empowerment.
The Failure Mode of Empowerment is the situation where the sponsor possesses Understanding and Incentive, but lacks Empowerment. We explore this situation, along with possible remedies, before concluding with the Isolation Mode, the situation where Empowerment is present, but alone, with neither Understanding nor Incentive in place beside it.
The success mode of Empowerment is simple, yet essential. Empowerment is the least visible element of the Trinity, more notable in its absence than in its presence. Where the Sponsor sees the need for something to be done to the benefit of the business through analytics, and has the right Incentive to make it happen, Appropriate Empowerment simply means: it happens. There is no one who can overrule, block, derail or otherwise unhelpfully modify any analytics initiative that has been put into motion.
Understanding ensures that the sponsor identifies the right analytics initiative for the greatest benefit to the business, and takes into account all that is required to enable it. Incentive ensures that the Sponsor actually wants this to happen. Empowerment then is simple: the Sponsor is in a position to launch the initiative and ensure that it proceeds to the correct conclusion. He is able to support it with all the resources it requires, and to protect it from unhelpful stakeholders. He is also in a position to ensure that recipients of analytics recommendations act on them if the process requires them to do so. Tyrannical? Perhaps. Far-fetched? Certainly. But this is the ideal, however out of reach it may be for (current) real-world large organisations.
Empowerment is thus quite simple. It is the ability to make things happen.
It is also an absence of unhelpful constraints. A Sponsor with the Holy Trinity is sufficiently empowered not to worry about unreasonable or ill-defined expectations of value before the initiative or function is ready. Empowerment ensures that the function is not subject to IT-style management practices, or to deterministic waterfall and project management approaches. His analytics function is lean, agile and experimental: free to learn and to fail repeatedly (for a time), as required to continually reach insights of massive value and exploit them.
The failure mode sets in when a sponsor has the best of intentions in terms of Incentive, is well versed in Understanding what an analytics function can do and what it requires to achieve it, and holds a budget and a mandate to create the analytics function. Unfortunately, he may well lack the power to act as Understanding and Incentive compel him to.
Any dilution of Empowerment invites unreasonable expectations born of others’ poorer Understanding and Incentive. A sponsor beholden to other managers, stakeholders and the like is subject to constraints, expectations and pressures that may prevent an agile, experimental approach. The cargo cult of analytics (the “Analytics in a Box” solutions promoted by some vendors) stands in opposition to the agile approach, yet enjoys attention and support from far too many senior executives. This cargo cult, subscribed to by much of senior management, expects great value from analytics, but does not know how to define this value, or even how to measure it. This very lack of clarity may be what imposes inappropriate deterministic project management frameworks such as PRINCE2, and other inappropriate business analysis and management oversight by people who have no idea what they are managing or why. In such situations, project managers may grab the first objective metric, however irrelevant or minor, and focus on it as a box-ticking exercise. The analytics function is then little more than an IT production line, creating something of indeterminate value to satisfy a management fad. A sponsor beholden to such powers cannot be said to be sufficiently empowered. Worse yet, ignorant or indifferent management may relegate the sponsor under the auspices of IT. Needless to say, this is not an ideal outcome.
One large pathology crippling Empowerment is the modern corporate stakeholder model. A committee of stakeholders is not a Sponsor, especially when enough members of that committee have far from perfect Incentive or Understanding, and perhaps far too much Empowerment. A committee can, on the whole, be more stupid, more poorly Incentivised and more disempowered than any one of its members. A Sponsor beholden to such a Committee is hardly empowered, and the Committee-as-Sponsor is a far from ideal scenario. The fact that this situation is the reality in so many large organisations does not diminish the fact that it is utterly pathological.
In the ideal situation the Sponsor is beholden to no one who holds excessive power while being inadequate in the other two key characteristics. The ideal Sponsor is therefore the CEO, and better yet an owner-manager. Again, this is perhaps unrealistic, but it still needs to be identified as the ideal, and any deviation from it analysed in terms of potential failure of Empowerment. It is also the reason that the most innovative, valuable and agile analytics exist in tech startups and not in large “Enterprises” (in quotes because they are usually the very opposite of that word).
Not all pathologies of Empowerment concern levels of power above the Sponsor. Others are lateral. The most immediate lateral power issue is with IT: too many IT functions consider it their job to block analytics access to tools, especially open source tools that are otherwise readily available, free and powerful. They may prevent access to adequate, otherwise cheap and readily available hardware, and to useful online services such as cloud computing. They are also known for starving the analytics function of data. Too many analytics functions are in a situation where the main expenditure of effort is building business cases for data, tools and hardware. A sponsor who knows this to be the case but cannot fix it is clearly not sufficiently empowered.
Lateral Empowerment is also an issue with “trigger pullers”, people whose job it is to act on the recommendations of operational analytics. The most striking case of this is a pathology I have seen in a multitude of organisations making use of predictive operational risk analytics. Predictive models provide lists of targets (e.g. revenue leakage, non-compliance, suspicious behaviour, fraud risk indicators). In all cases a human being is provided with a list of targets generated by the predictive model. Ideally, this human being proceeds to manually investigate the targeted cases. Unfortunately, in most situations, these individuals do not understand or trust the predictive models. In my experience, many such individuals cannot conceive of the very idea of inferring a model from data. It would appear that there are whole cultures of people who cannot imagine such a thing as statistical induction. They naturally voice their displeasure and challenge, stall and undermine the process. Much of a Sponsor’s job seems to be the thankless, draining and often never-ending task of “winning them over”. A sufficiently empowered Sponsor would, however, be in a different situation. When asked why these people should trust the models, he would be able to answer: “because if you do not, I will fire you and perhaps hire someone who does what they are told. Or replace you with a smart pattern-matching algorithm”. Again, this is perhaps not realistic, and perhaps suggests something that certain Public Sector Unions would consider on par with a crime against humanity – asking that people do their jobs. The whole issue of uncooperative “trigger pullers” was only raised to make a point about Appropriate Empowerment: if a Sponsor is not able to ensure that the human components of an operational analytics value chain cooperate and act as part of that chain, there is a failure of Empowerment. Perhaps effective analytics sponsorship, as defined in this series, is impossible in most organisations where employee non-compliance and stakeholding is a given.
A lack of Empowerment is, however, far from the end of the world, and the relatively dystopian situation described above matches many existing analytics functions, particularly in government and quasi-government organisations. They still manage to survive and add some value, although arguably but a fraction of what would be possible if only sponsors were more Empowered. These organisations have in fact found themselves innovating on a number of fronts, dealing with insufficient Empowerment and, in some cases, developing methods of generating more of it.
One key solution to the problem of insufficient Empowerment is Separation from IT. As far as possible, as quickly as possible, it is important to establish a “sandpit environment”, separate from the main IT network, where new hardware may be added, and software loaded outside of IT governance. This is essential if appropriate computational power and open source tools are to be leveraged quickly and effectively.
Another part of the solution, and one that is even more fundamental, is Stealth Mode. It is imperative that a new analytics function has the ability to learn, experiment and fail in its early stages. Expensive budget items such as vendor tools create massive, though ill-defined, expectations. Expectation management is yet another reason to avoid expensive vendor software early in the creation of an analytics function.
Ideally, the function has a small crew of capable, flexible people, a small budget, and access to data and open source tools. Its main focus is a well-defined, business-as-usual task such as reporting. Actual analytics can be done quietly on the side, and not announced until it yields results. These results can then be presented as wins to formalise and Empower the nascent analytics function. There may then be sufficient leverage to acquire more staff, create a sandpit environment and acquire data reliably.
As discussed previously, the most important element of the Trinity is Incentive. With Incentive alone, the Sponsor knows that their first task is to increase their Understanding. Some of this is reading and study, some is consultation with experts, and much of it is experience, which can be gained in stealth mode. Empowerment is important, but as we can see it comes third in importance.
Indeed, most capable analytics professionals find themselves working for under-empowered sponsors. This is not ideal, but nor is it a career-ending situation. Indeed, the struggle for further Empowerment of the Sponsor is the de facto KPI of most analytics functions, and many professionals find it as exhilarating as they may find it frustrating.
It remains to discuss the Isolation Mode of Empowerment. What happens when the Sponsor has all the power, but no Understanding, and lacks the right Incentive? Here ignorance conspires with either a lack of real enthusiasm for Analytics or an entirely different agenda, and gives them a hefty cheque book. So, what can happen? A storm of cargo cults, management fads and buzzwords. “Analytics”, having something to do with “data” and software, must clearly be some kind of IT, best managed and bought by the CIO and best explained by people who sell software. And that’s how the wrong kinds of Vendors happen. Long sales lunches. Exciting pre-sales presentations. Use of the words “Enterprise”, “Innovation” and “Insight” by people who don’t have anything to do with any of them. “Case studies” of previous such exchanges in other high-profile corporations, presented as success. People who may not really care what they are selling sell to people who don’t really understand or care what they are buying. Consultants, of the “best practice”, “brand recognition” kind, jump in. More money gets spent. Everybody involved wins, except the (theoretical and distant) shareholders, citizens and other ultimate beneficiaries of the business. Almost always, none of the parties is an actual owner of the business in question. Most owners are far more sensible than that.
So what happens after that? Software gets installed. Systems get integrated. People get hired, maybe, as an afterthought, to mind the (far more important) Machines. These people are likely software developers, database managers and project managers. Maybe even a token statistician. Gantt charts get ticked. Bonuses get paid (at least on the vendor side). Conferences benefit from new “best practice” case studies. The Vendor-Consulting complex marches on in all its dinosauric grandeur.
So Incentive and Understanding matter, and Empowerment on its own is not a great idea, however common this situation may be.
Enough of Cato Unbound’s What’s Wrong With Expert Predictions debate has now unfolded that it makes sense for me to offer some commentary. The discussion encompasses many predictive and decision-making subject areas and institutions – politics, economics, business, media punditry – but for the purposes of Analyst First I’m primarily interested in prediction in the context of organisations.
All the discussants agree that expert predictive track records are terrible, but they diverge in the degree to which they see this as problematic and in their recommendations as to what to do about it. The debate so far:
In their Lead Essay, Dan Gardner and Philip Tetlock present a puzzle:
Every year, corporations and governments spend staggering amounts of money on forecasting and one might think they would be keenly interested in determining the worth of their purchases and ensuring they are the very best available. But most aren’t. They spend little or nothing analyzing the accuracy of forecasts and not much more on research to develop and compare forecasting methods.
They go on to provide an overview of Tetlock’s longitudinal study of experts, encompassing 28,000 predictions over a fifteen-year period, which found that eclectic foxes outperform dogmatic hedgehogs, but that both are outperformed by extrapolation algorithms. They argue that we need to get better at accepting our limitations and to “give greater consideration to living with failure, uncertainty, and surprise”. They accordingly call for “decentralized decision-making and a proliferation of small-scale experimentation”.
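The phrase “extrapolation algorithms” deserves emphasis: the benchmarks that beat the experts are strikingly crude. A minimal sketch of the kind of baseline involved (illustrative only; these are not Tetlock’s actual algorithms):

```python
# Two naive baselines of the kind expert judgement struggles to beat.
# Illustrative only -- not a reproduction of Tetlock's study design.

def persistence_forecast(history):
    """Predict that tomorrow will be just like today."""
    return history[-1]

def linear_extrapolation(history):
    """Extend the average trend of the series one step forward."""
    if len(history) < 2:
        return history[-1]
    trend = (history[-1] - history[0]) / (len(history) - 1)
    return history[-1] + trend

series = [3.1, 3.4, 3.3, 3.7, 3.9]    # hypothetical indicator values
print(persistence_forecast(series))    # 3.9
print(linear_extrapolation(series))    # ~4.1
```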
In the Reaction Essay section, Robin Hanson addresses the puzzle of why forecasting remains so immune to accountability via – presumably easy to assemble – track records: “[s]urprising disinterest [he means uninterest] in forecasting accuracy could be explained either by its costs being higher, or its benefits being lower, than we expect.” His conclusion is that, even in profit and loss settings such as organisations, the signalling value of forecasting must compete with its information value:
Even in business, champions need to assemble supporting political coalitions to create and sustain large projects. As such coalitions are not lightly disbanded, they are reluctant to allow last minute forecast changes to threaten project support. It is often more important to assemble crowds of supporting “yes-men” to signal sufficient support, than it is to get accurate feedback and updates on project success. Also, since project failures are often followed by a search for scapegoats, project managers are reluctant to allow the creation …
He points out that, while prediction markets are best able to incentivise information holders to provide accurate forecasts, institutional respect for accuracy is a necessary and thus far absent precondition to their widespread uptake.
John H. Cochrane turns the tables by arguing that unforecastability is a good sign as seen through the lens of economics:
In fact, many economic events should be unforecastable, and their unforecastability is a sign that the markets and our theories about them are working well.
This statement is clearest in the case of financial markets. If anyone could tell you with any sort of certainty that “the market will go up tomorrow,” you could use that information to buy today and make a fortune. So could everyone else. As we all try to buy, the market would go up today, right to the point that nobody can tell whether tomorrow’s value will be higher or lower.
An “efficient” market should be unpredictable. If markets went steadily up and delivered return without risk, then markets would not be working as they should.
Forecasting, in the sense of accurately trying to predict the future, is a “fool’s game”. But it does work as an input into risk management:
The good use of “forecasting” is to get a better handle on probabilities, so we focus our risk management resources on the most important events. But we must still pay attention to events, and buy insurance against them, based as much on the painfulness of the event as on its probability. (Note to economics techies: what matters is the risk-neutral probability, probability weighted by marginal utility.)
So it’s not really the forecast that’s wrong, it’s what people do with it. If we all understood the essential unpredictability of the world, especially of rare and very costly events, if we got rid of the habit of mind that asks for a forecast and then makes “plans” as if that were the only state of the world that could occur; if we instead focused on laying out all the bad things that could happen and made sure we had insurance or contingency plans, both personal and public policies might be a lot better.
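Cochrane’s parenthetical note to “economics techies” can be unpacked a little. In a standard one-period setup (my formalisation, not his), the risk-neutral probability of state \(i\) is the physical probability \(p_i\) reweighted by the marginal utility of consumption \(u'(c_i)\) in that state:

\[ q_i = \frac{p_i \, u'(c_i)}{\sum_j p_j \, u'(c_j)} \]

Painful states, where consumption is low and marginal utility high, therefore carry more decision weight than their raw probability suggests, which is exactly why insurance against rare but very costly events can be worth buying.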
Cochrane defends a hedgehog-like reversion to principles – basic economic principles like supply and demand – in order to build effective conditional forecasts which inform plans and provide decision support.
Bruce Bueno de Mesquita argues that expert prediction is, properly contextualised, a sideshow. Statistical methods are widely used, so much so that we’ve ceased to notice (e.g. in insurance pricing and political polling). Game theory is better still, and continues to make incremental progress:
Are these methods perfect or omniscient? Certainly not! Are the marginal returns to knowledge over naïve methods (expert opinion; predicting that tomorrow will be just like today) substantial? I believe the evidence warrants an enthusiastic “Yes!” Nevertheless, despite the numerous successes in designing predictive methods, we appropriately focus on failures. After all, by studying failure methodically we are likely to make progress in eliminating some errors in the future.
So why do we continue to focus on the poorly performing experts? De Mesquita’s view is that:
Unfortunately, government, business, and the media assume that expertise—knowing the history, culture, mores, and language of a place, for instance—is sufficient to anticipate the unfolding of events. Indeed, too often many of us dismiss approaches to prediction that require knowledge of statistical methods, mathematics, and systematic research design. We seem to prefer “wisdom” over science, even though the evidence shows that the application of the scientific method, with all of its demands, outperforms experts.
De Mesquita goes on to explain and advocate his own game theoretic (Expected Utility Model) approach:
Acting like a fox, I gather information from a wide variety of experts. They are asked only for specific current information (Who wants to influence a decision? What outcome do they currently advocate? How focused are they on the issue compared to other questions on their plate? How flexible are they about getting the outcome they advocate? And how much clout could they exert?). They are not asked to make judgments about what will happen. Then, acting as a hedgehog, I use that information as data with which to seed a dynamic applied game theory model. The model’s logic then produces not only specific predictions about the issues in question, but also a probability distribution around the predictions. The predictions are detailed and nuanced. They address not only what outcome is likely to arise, but also how each “player” will act, how they are likely to relate to other players over time, what they believe about each other, and much more.
In the Conversation section, Robin Hanson challenges Cochrane and de Mesquita to produce conditional forecasts and submit them to systematic public measurement and verification. He is doubtful, however, that they will assent:
The sad fact is that the many research patrons eager to fund hedgehoggy research by folks like Cochrane and De Mesquita show little interest in funding forecasting competitions at the scale required to get public participation by such prestigious folks.
Forecasting, he contends, is a domain in which the rewards to affiliation with prominent expertise trump accuracy.
Bruce Bueno de Mesquita replies that the acceptance of his methods in journals, via peer review, is evidence of their having been sufficiently scrutinised; furthermore, that no one has been willing to publicly compete with him; additionally, that he has successfully beaten alternative approaches; and finally, that he has made his methods available online.
Robin Hanson responds that more comprehensive standards of proof are required to settle the matter.
Gardner and Tetlock then provide an insightful running summary. In response to Hanson they speculate that the costs of admitting to poor forecasting performance would disenfranchise those currently enjoying their – unjustified in terms of performance – public and organisational reputations:
Open prediction contests will reveal how hard it is [for them] to outperform their junior assistants and secretaries. Insofar as technologies such as prediction markets make it easier to figure out who has better or worse performance over long stretches, prediction markets create exactly the sort of transparency that destabilizes status hierarchies… If these hypotheses are correct the prognosis for prediction markets—and transparent competitions of relative forecasting performance—is grim. Epistemic elites are smart enough to recognize a serious threat to their dominance.
In response to Cochrane they speak up for the value of hedgehogs – more compelling, more visionary, better at envisioning extreme events – but note that the cost of this is that they are more wrong, more often.
They close by welcoming de Mesquita’s willingness to be publicly scrutinised, note that the jury is still out in terms of systematic and decomposed measurement of his methods, and caution that:
For many categories of forecasting problems, we are likely to bump into the optimal forecasting frontier quite quickly. There is an irreducible indeterminacy to history and no amount of ingenuity will allow us to predict beyond a certain point.
De Mesquita responds that he welcomes being assessed.
Although Cochrane comes close, none of the discussants explicitly recognises and makes central the difference between forecasting and other activities which organisations call forecasting (i.e. planning and goal setting). I explained this distinction in a previous post, namely:
- Forecasting means objectively estimating the most likely future outcome: “what’s going to happen?”
- Goal setting means putting a target in place, generally for motivational purposes: “what would we like to happen?”
- Planning means establishing an intended course of action, usually to direct the allocation of resources: “what are we going to do?”
This distinction is key because, while all three activities are based on prediction, only in the case of forecasting is predictive accuracy the primary purpose. Organisations can improve all of these, but to do so they need to address three tiers of potential failure: paradigm, categories and methodologies.
All the Cato discussants take it as read that, in assessing predictions, they’re operating in an empirical paradigm. In organisations, however, this can’t be taken for granted. Many organisations place prediction either in the wrong paradigm, or no paradigm at all. It’s common for predictive activities and processes to be ritualised and adhered to, but without any systematic error measurement or validation. Gardner and Tetlock acknowledge the “widespread lack of curiosity—lack of interest in thinking about how we think about possible futures” as “a phenomenon worthy of investigation in its own right,” pointing out the wastefulness of remaining ignorant given the resources involved.
Systematic error measurement and validation can’t happen without the right categories being first recognised and agreed upon. Disambiguating forecasting from goal setting from planning is critical. Organisations don’t do this well. Loose language doesn’t help. The same Finance department will update a budget (a plan) and call it a “forecast”, oversee the revision of sales “forecasts” (goals), and publish revenue estimates for the scrutiny of stock market analysts (true forecasts). As an earlier Analyst First post pointed out, these activities, while all reliant on objective estimation, do not share the same benchmarks when it comes to assessing error and value. Forecast error makes sense for forecasting; execution error makes more sense for goal setting and planning.
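The benchmark distinction is easy to make concrete. A toy sketch, with hypothetical figures and function names of my own:

```python
# One actual outcome, scored against two different benchmarks.
# Hypothetical figures for illustration only.

def forecast_error(actual, forecast):
    """How wrong was the objective estimate? Judges the forecaster."""
    return abs(actual - forecast)

def execution_error(actual, target):
    """How far did we fall short of (or exceed) the goal or plan?
    Judges the execution, not the estimate."""
    return actual - target

actual_revenue = 9.2   # $m, what actually happened
true_forecast  = 9.5   # $m, the objective estimate
stretch_goal   = 11.0  # $m, the motivational target

print(forecast_error(actual_revenue, true_forecast))  # ~0.3
print(execution_error(actual_revenue, stretch_goal))  # ~-1.8
```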
The Cato discussants all tacitly acknowledge these distinctions, but none recognises their implications when it comes to understanding the way organisations do prediction.
Tetlock’s experiment required that pundits’ anonymity be protected: participants knew to distance themselves from their projections once they could be held accountable for accuracy. The implication here is either that pundits are dishonest, or that they recognise that their projections serve a purpose other than informing people about the likelihood of future events. Gardner and Tetlock, and Hanson, acknowledge that punditry is a form of entertainment, has signalling value, and by virtue of this trades off accuracy for clarity and narrative value. As Hanson puts it:
Media consumers can be educated and entertained by clever, witty, but accessible commentary, and can coordinate to signal that they are smart and well-read by quoting and discussing the words of the same few focal pundits. Also, impressive pundits with prestigious credentials and clear “philosophical” positions can let readers and viewers gain by affiliation with such impressiveness, credentials, and positions. Being easier to understand and classify helps “hedgehogs” to serve many of these functions.
Hanson recognises that affiliation with sophistication has signalling value within organisations too. He notes the multiple roles played by managers, including the requirement that they appear impressive enough to attract affiliation and inspire their subordinates:
[C]onsider next the many functions and roles of managers, both public and private. By being personally impressive, and by being identified with attractive philosophical positions, leaders can inspire people to work for and affiliate with their organizations. Such support can be threatened by clear tracking of leader forecasts, if that questions leader impressiveness.
He goes on to describe the motivational impact of managerial ‘overconfidence’:
Often, managers can increase project effort by getting participants to see an intermediate chance of the project making important deadlines—the project is both likely to succeed, and to fail. Accurate estimates of the chances of making deadlines can undermine this impression management. Similarly, overconfident managers who promise more than they can deliver are often preferred, as they push teams harder when they fall behind and deliver more overall.
Incentivising workers to “deliver more overall” is precisely the purpose of goal setting. Consistently producing overshooting projections in this context isn’t necessarily “forecast hypocrisy,” as Hanson characterises it. It may be effective stretch targeting.
Many of the discussants also acknowledge that planning is a different activity from forecasting (and goal setting), but don’t pursue the full implications of this in terms of error and value measurement. The Kenneth Arrow anecdote relayed by Gardner and Tetlock, for example, illustrates that plans are reliant on, but different from, forecasts:
Some [corporations and governments] even persist in using forecasts that are manifestly unreliable, an attitude encountered by the future Nobel laureate Kenneth Arrow when he was a young statistician during the Second World War. When Arrow discovered that month-long weather forecasts used by the army were worthless, he warned his superiors against using them. He was rebuffed. “The Commanding General is well aware the forecasts are no good,” he was told. “However, he needs them for planning purposes.”
Gardner and Tetlock also look at the role of self-aware prediction (i.e. prediction aware of its own limitations) in preparedness planning, comparing the effectiveness of the recent New Zealand and Haiti earthquake responses:
Designing for resiliency is essential, as New Zealanders discovered in February when a major earthquake struck Christchurch. 181 people were killed. When a somewhat larger earthquake struck Haiti in 2010, it killed hundreds of thousands. The difference? New Zealand’s infrastructure was designed and constructed to withstand an earthquake, whenever it might come. Haiti’s wasn’t.
Cochrane seconds this, adding that predictions have scenario generation utility regardless of their accuracy:
Once we recognize that uncertainty will always remain, risk management rather than forecasting is much wiser. Just the step of naming the events that could happen is useful.
In these and other ways, the discussants acknowledge that accuracy isn’t the only purpose of prediction. It should therefore follow that forecast error might not be the only relevant measure.
Much of the discussion contrasts different predictive tools, techniques and approaches: expert judgement, statistical algorithms, prediction markets, game theory. Methodologies and expectations both need to be appropriately calibrated: simple statistical extrapolation works well in some settings, but in complex systems environments the best we can hope for may be a better feel for the probabilities involved.
There are a range of insights here for organisations. Individual human judgement on its own, it is unanimously acknowledged, performs poorly. Statistical algorithms consistently beat the experts. There is general agreement among the discussants that eclecticism is desirable. The clear implication is that organisations should adopt collective intelligence methods.
Tetlock’s wider work on expert political judgement has implications for optimal forecasting team composition (use hedgehogs to generate possibilities and foxes to synthesise and calibrate probabilities). Gardner and Tetlock also call for what we term Decision Performance Management:
Imagine a system for recording and judging forecasts. Imagine running tallies of forecasters’ accuracy rates. Imagine advocates on either side of a policy debate specifying in advance precisely what outcomes their desired approach is expected to produce, the evidence that will settle whether it has done so, and the conditions under which participants would agree to say “I was wrong.” Imagine pundits being held to account.
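The core of such a tally could be very small indeed. A hypothetical sketch, using Brier scores over probabilistic yes/no predictions (the scoring rule is my choice of illustration, not something the discussants specify):

```python
# A forecaster's track record: (stated probability of event, outcome).
# Hypothetical pundits and numbers, for illustration only.

def brier_score(records):
    """Mean squared error of probability judgements.
    0 is perfect; 0.25 is an uninformative always-say-50% forecaster."""
    return sum((p - outcome) ** 2 for p, outcome in records) / len(records)

pundit_a = [(0.9, 1), (0.8, 0), (0.7, 1), (0.95, 0)]  # confident, often wrong
pundit_b = [(0.6, 1), (0.5, 0), (0.6, 1), (0.4, 0)]   # hedged, calibrated

print(brier_score(pundit_a))  # ~0.41 -- worse than coin-flipping
print(brier_score(pundit_b))  # ~0.18
```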
It’s also worth imagining what sort of environment supports this, as Hanson does in his discussion of a different “social equilibrium”:
A track record tech must be combined with a social equilibrium that punishes those with poor records, and thus encourages rivals and victims to collect and report records. The lesson I take for forecast accuracy is that it isn’t enough to devise ways to record forecast accuracy—we also need a new matching social respect for such records.
He’s right. New ways to record accuracy aren’t enough. We also need to know whether accuracy is the real goal. On the subject of goals, when it comes to organisational planning and goal setting, it may well be that these are best understood in a game theoretic context.
Whatever the case, once forecasting, goal setting and planning are disambiguated, and because they are all related, empiricism means doing all three better.
Related Analyst First posts:
- Hedgehogs are foxy when they’re right
- What’s Wrong with Expert Predictions
- The Folly of Prediction
- Forecast error versus execution error
- Forecasting, goal setting, planning
- Robin Hanson on Information Accounting
- Paying for software is buying insurance
The modern knowledge worker has indeed progressed far past the illiterate, innumerate businessman of ancient Sumer. They can do their own reading and counting, and many other things besides. They know the rudiments of double entry bookkeeping, though they may not be accountants. They are familiar with laws pertaining to their business, industry and work area, though they are probably not lawyers. They probably know the basics of project management, marketing, human resources or event management, without being an expert in any of those fields.
Thus, most modern knowledge workers can perform basic functions in any of these areas. Where their expertise is stretched, they would usually know how to recruit, retain or collaborate with an expert in any of these areas, brief them on requirements, and understand any advice or directions given.
This laundry list of capabilities forms the core of the checklist of skills one would expect from a business course. While one need not be an expert or accredited in any of these areas, we can say that a modern knowledge worker is literate in all of them.
Closer to the topic at hand, the modern knowledge worker is expected to be computer literate, which is to say able to use a computer productively, often in the service of the professional literacies outlined above. Again, they would hopefully know their limitations, and know when to call an expert to repair faults, enable new capabilities or create new tools.
One interesting thing about literacies is that they are often unspoken: few job interviews ask explicitly if one can actually read. Not many more executive interviews ask if one can surf the Web, read a balance sheet, instruct a lawyer or define what a “marketing campaign” is. These things are tacit, assumed knowledge.
While there are indeed islands of expertise in law, IT, accounting, HR, marketing and many other areas in the modern business, these would be crippled if the rest of the business, particularly senior management, lacked the minimal literacy required to engage these expert functions, to cooperate with them, instruct them and act on their advice.
Most crucially, a minimal degree of literacy is required to determine if the expert has done a good job, added value or created risk. Again, these processes are largely tacit.
It would be untrue to suggest that these literacies exist perfectly in all businesses. Indeed, one way to assess the effectiveness of knowledge workers, particularly middle and senior managers, is the degree to which they really have a truly literate grasp of the business functions that they interact with on a regular basis.
The Dilbertesque world all too familiar to so many of us exists due to an epidemic of false literacy in some organisations. What is false literacy? It is the ability to impersonate a literacy to another illiterate person. In business, it can be seen as a minimal, inadequate level of literacy, usually consisting of nothing more than buzzwords. To thrive, it usually requires a critical mass of absent or false literacy, a lack of influential, literate people, and poor performance measurement. Usually arbitrary politicisation, poor accountability and poor literacy work together. This situation is rarer in smaller, privately owned organisations with majority shareholders. It is more common at the other extreme of the ownership spectrum.
False literacy is often sufficient, or deemed to be so, in some business areas such as recruiting or sales. Here an explicit “laundry list” of features, skills or other factors can be exchanged between buyer and seller without either party actually knowing what any of the terms mean. Perhaps this is enough when recruiting an IT developer with “C++, Java and backend systems”, but it is more of an issue when it is the way a CEO runs an insurance company.
Another way of looking at false literacy is through the idea of a “cargo cult”, which helps to define the culture of an organisation where false literacy is the norm.
And now we are finally ready to talk about modern Business Analytics.
Like all good things in a business function not directly supervised by a real owner with a real stake, a good outcome is defined as:
“What appears to be a solution to what is understood to be the right interpretation of what is understood to be a problem”
Performance management can happily coexist with this definition as long as it does not measure anything actually useful, which is to say controversial and relevant.
It helps a lot if the task, its outcome or success criteria are beyond the capacities of those managing it.
This definition works both ways.
The obvious interpretation is that complete nonsense can pass for good Analytics.
The converse is that the success of good Analytics can be easily lost.
The general theme is that the sociology of Analytics can sometimes read like Dilbert.
A postscript to ponder:
While it may be true that “if you can’t measure it, you can’t manage it”,
Just because you are measuring something does not mean that you are measuring “it”.
Also note that even if you can measure “it”, it is still possible, by elementary logic, that you cannot manage it.
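To spell out the elementary logic (my formalisation): let \(M\) stand for “it can be measured” and \(G\) for “it can be managed”. The slogan asserts

\[ \neg M \Rightarrow \neg G, \quad \text{equivalently} \quad G \Rightarrow M, \]

that is, measurement is necessary for management. The converse, \(M \Rightarrow G\), does not follow; assuming it does is the converse error the postscript warns against.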