Six decades into the computer revolution, four decades since the invention of the microprocessor, and two decades into the rise of the modern Internet, all of the technology required to transform industries through software finally works and can be widely delivered at global scale.
That’s Marc Andreessen, venture capitalist and Netscape co-founder, writing in the Wall Street Journal. The piece could just as easily have been titled ‘Why Analytics Is Eating The World’: if you substitute “analytics” for “software” throughout, his argument largely holds. Many of the businesses cited by Andreessen are not just software-centric but analytics-centric as well: Google, Amazon, Netflix, Pandora, Facebook, LinkedIn. Such companies compete in arms-race environments for Extremistan market dominance.
In some industries, particularly those with a heavy real-world component such as oil and gas, the software revolution is primarily an opportunity for incumbents. But in many industries, new software ideas will result in the rise of new Silicon Valley-style start-ups that invade existing industries with impunity. Over the next 10 years, the battles between incumbents and software-powered insurgents will be epic. Joseph Schumpeter, the economist who coined the term “creative destruction,” would be proud.
Two of the incumbents mentioned are Wal-Mart and FedEx—both successful adopters of analytics. Of insurgencies:
Perhaps the single most dramatic example of this phenomenon of software eating a traditional business is the suicide of Borders and corresponding rise of Amazon. In 2001, Borders agreed to hand over its online business to Amazon under the theory that online book sales were non-strategic and unimportant.
Today, the world’s largest bookseller, Amazon, is a software company—its core capability is its amazing software engine for selling virtually everything online, no retail stores necessary. On top of that, while Borders was thrashing in the throes of impending bankruptcy, Amazon rearranged its web site to promote its Kindle digital books over physical books for the first time. Now even the books themselves are software.
Inciting innovation and driving disruption through software—and analytics—is not, however, without its challenges. Andreessen:
[M]any people in the U.S. and around the world lack the education and skills required to participate in the great new companies coming out of the software revolution. This is a tragedy since every company I work with is absolutely starved for talent. Qualified software engineers, managers, marketers and salespeople in Silicon Valley can rack up dozens of high-paying, high-upside job offers any time they want, while national unemployment and underemployment is sky high. This problem is even worse than it looks because many workers in existing industries will be stranded on the wrong side of software-based disruption and may never be able to work in their fields again. There’s no way through this problem other than education, and we have a long way to go.
This echoes two of Analyst First’s core contentions. First, that analytics is first and foremost about human infrastructure. Second, that although it is increasingly a core business literacy, analytics is at the same time beyond the reach of a growing number of workers:
The problem is, basic literacy and arithmetic numeracy is pretty much where it appears to have stopped for all but a new technological elite of scribes. This includes way too many people whose job it is to develop strategy, see “the big picture”, produce “evidence based policy”, hear the arguments of quantitatively skilled advisors or in many other ways interact with, and manage a data-rich world, of changing, poorly understood circumstances, vast uncertainty and with powerful analysis tools just a click away.
This is basically the condition of most people interacting with data in the modern world. These are the people who think that BI=Analytics=Reporting. These are the people who cannot read an XY graph, or trust any data summary more complex than an average. These are the people who, when shown any kind of report, dashboard or graph, ask to see the raw numbers because they are on firmer ground there, even if the numbers are millions of transactions and no useful inference can be drawn from eyeballing them.
All Analytics – The Community for Data Management, Business Intelligence, & Analytics – has invited Analyst First to argue the case for open source software in analytics as part of their Point/Counterpoint series. Our point post, ‘The Case for Commodity & Open-Source Analytics’ is here. The counterpoint post, ‘Downsides Dampen Open-Source Analytics’, by Ajay Ohri, is here. Beth Schultz’s introduction is here.
I encourage our readers to explore the All Analytics site and to comment on the debate (which requires free registration).
So far from making us more profligate with information, perhaps the Goddess of ‘big data’ will spur us to be smarter in data selection, and ensure more intelligence is embedded within our data extraction, transformation and reporting processes.
Greg Taylor‘s comments on the ‘Knowing what you’re missing‘ post are spot on. One of the clear implications of the big data explosion—technical challenges aside—is that manual analysis methods simply can’t scale to the volume and velocity at which potentially relevant data is being generated. As such, analytics (particularly of the machine learning variety) is ever more vital. One of my rules of thumb in consulting is that any OLAP cube is a standing business case for predictive modelling. As I put it in the ‘Advanced analytics and OLAP‘ post:
OLAP makes multidimensional data exploration about as fast and intuitive as it can be when a human is doing the driving. This means being able to arrange on screen, in two dimensions (perhaps taking advantage of colour and shape to visualise a third and fourth), relatively small subsets and arithmetical summaries of data. Advanced analytics, however, automates exploration. Only data mining methods can look at all dimensions simultaneously, at all levels, in combination. And they can do this in unsupervised (looking for natural structure in the data) or supervised (inferring input-outcome relationships) modes.
- So as Analysts search more broadly for relevant data to meet the decision-making requirements of management, perhaps they increasingly need to ask themselves: how will this piece of information fit within the network of predictive functions which explains the business?
- How might Analysts apply Occam’s razor to ensure that only information which contributes predictive understanding is included, given the exponential growth in potential data sources? One logical approach is for Analysts to undertake more experimental testing of variables (and transformations) for their explanatory power with respect to business outcomes.
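The experimental testing the second question suggests can be made concrete as a holdout comparison. The sketch below is a minimal, numpy-only illustration on synthetic data (all variable names and thresholds are hypothetical): a candidate variable earns its place only if it improves out-of-sample predictive accuracy.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: one genuinely predictive variable, one pure-noise candidate.
n = 2000
x_useful = rng.normal(size=n)
x_noise = rng.normal(size=n)                # candidate with no real signal
y = 3.0 * x_useful + rng.normal(size=n)

def holdout_mse(features, y, split=1000):
    """Fit OLS on the first `split` rows, score on the remainder."""
    A = np.column_stack([np.ones(len(y))] + features)
    coef, *_ = np.linalg.lstsq(A[:split], y[:split], rcond=None)
    resid = y[split:] - A[split:] @ coef
    return float(np.mean(resid ** 2))

base = holdout_mse([x_useful], y)
with_noise = holdout_mse([x_useful, x_noise], y)

# Occam's razor, operationalised: admit the candidate variable only if it
# buys a material out-of-sample improvement (here, at least 1%).
keep = with_noise < base * 0.99
```

The same pattern extends to transformations of existing variables: fit with and without, and let out-of-sample error arbitrate.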
As the earlier post reported, status quo electronic infrastructures aren’t ready for big data, and new technologies and disciplines are evolving rapidly to close the gap. But even more substantive changes are required of organisations’ human infrastructures. The key transition that business users of data need to make in the big data context is from the default of consuming more data to the practice of consuming data of higher value. This means becoming analytically literate and learning how to trust and leverage analysts. The key role that analysts must play in supporting decision makers is to understand what constitutes higher value, and to seek it out and communicate it. The key role for IT functions and BI managers is to enable analysts to enable decision makers.
The tricky thing here is that once a system gets automated it’s all too easy to kind of let it keep running, and there’s no better way to lose money quickly than to have a bad decision algorithm keep running in an automated fashion. So I think it’s very critical for managers to understand what’s behind these things—what is the underlying logic. All of these systems and these algorithms make assumptions about the world. In the case of the financial crisis the assumption that was violated was that housing prices would continue to go up and people with subprime mortgages would continue to be able to refinance or pay them back. And when you see the world changing in some critical way like that, that’s the time to step in and say “you know, our models aren’t fitting anymore.” Now there are semi-automated tools that will let you look at your predictive models and say, “the fit isn’t quite what it used to be,” and kind of let you know that your fit is decreasing over time—the fit of your models to the actual data. But that is not true for rule based systems. It’s only true for algorithmic based systems, and even those systems are not very widely used. A few banks have applied them, but it requires a level of sophistication and caution in the use of these that most organisations haven’t really mastered.
That’s Tom Davenport in a podcast interview with Frank Comis at the McKinsey Quarterly (requires free registration). The interview and its accompanying article, ‘Rethinking Knowledge Work: A Strategic Approach’, also by Davenport and discussed in a previous Analyst First post, comprise Week 1, Day 5 of the CORTEX MBAnalytics program. The core of his argument is that the free-access model—which has for the last two decades allowed high-end knowledge workers the freedom to use their mobile phones, surf the internet, and do what they like with office productivity software—has not increased these workers’ productivity. According to some studies, productivity has in fact declined. Productivity gains have, however, been realised over this same time period through the deployment of structured-provisioning systems to low-end knowledge workers (case management, workflow, document management, business process management, and so on). Davenport summarises that we have, if anything, ‘over-automated at the low end and under-automated at the high end.’
His conclusion is that more “structure” needs to be integrated into high-end knowledge work, but in advocating this he’s acutely aware of the challenges it introduces: worker resistance, demotivation, and reduced loyalty; performance measurement and management ambiguities; and the hazards of automation (on which he is elaborating in the quotation above).
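The fit-monitoring Davenport describes can be sketched in a few lines. What follows is a hypothetical illustration, not any vendor’s tool: a frozen decision model keeps assuming prices rise, a regime change breaks that assumption, and a rolling error monitor flags that “our models aren’t fitting anymore”. All the numbers (growth rates, window, threshold) are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(2)

# A "frozen" decision model: it assumes prices keep rising 1% a month.
def model_forecast(last_price):
    return last_price * 1.01

# Simulated actuals: the assumption holds for 24 months, then breaks.
prices = [100.0]
for t in range(1, 48):
    growth = 1.01 if t < 24 else 0.97       # regime change at month 24
    prices.append(prices[-1] * growth * (1 + rng.normal(0, 0.002)))

# Fit monitor: rolling mean absolute percentage error of the live model.
errors, alerts = [], []
for t in range(1, 48):
    pred = model_forecast(prices[t - 1])
    errors.append(abs(pred - prices[t]) / prices[t])
    rolling = np.mean(errors[-6:])          # six-month window
    if rolling > 0.02:                      # illustrative 2% threshold
        alerts.append(t)

# The monitor stays silent while the assumption holds, then flags the
# deteriorating fit a few months after the regime change.
drift_detected_at = alerts[0] if alerts else None
```

Note what the monitor does and does not do: it detects that fit is decreasing, but a manager still has to step in, ask what assumption was violated, and decide whether to retire the model.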
I argued in the previous post that Davenport’s ‘Rethinking Knowledge Work’ represented a change of direction from his landmark ‘Competing on Analytics‘ body of work. Having now listened to the podcast, I’ve changed my mind. I think he’s actually asking the logical next question: Why are so few organisations competing on analytics? Davenport has spent years studying and understanding how analytics is being used by a handful of firms to carve out new markets and disrupt existing ones. He’s now attempting to reconcile that demonstrated potential with the overwhelming lack of productivity across the vast majority of the “knowledge worker” cohort, of which he himself is a member. Instituting value extraction from analytics, as he says, “requires a level of sophistication and caution… that most organisations haven’t really mastered.”
Any analysis can be understood as the intersection of audience and subject. In the Business Analytics context, typical audiences are you, your customers, and your prospects. Typical subjects—for analyses that model human behaviour as opposed to other processes—include yourself, your customers, your competitors, and your adversaries. Some examples:
- Performance Management: for the organisation, about itself—e.g. employee scorecards, HR cubes, management reporting
- Most BI: for the organisation, about its customers—e.g. sales cubes
- Customer Intelligence: for the organisation, about its competitors and prospects
- Risk Intelligence: for the organisation, about its adversaries
- Most B2C Analytics: for customers and prospects, about customers—e.g. a commerce website’s recommendations engine
Most BI is employee-facing. Most analytics, as it gets operationalised, is aimed at customers and prospects in the form of surveys, experiments, recommendations, and targeted interactions and offers.
Week 1, Day 3 of the CORTEX MBAnalytics program covers ‘Competing on Talent Analytics‘, by Davenport, Harris, and Shapiro, from the October 2010 Harvard Business Review. The essay describes the application to employees of advanced analytics methods more typically aimed at customers. This might otherwise be termed ‘talent analytics’, ‘HR analytics’ or ‘Performance Management analytics’.
Philip Russom at the TDWI Blog:
The current hype and hubbub around big data analytics has shifted our focus on what’s usually called “advanced analytics.” That’s an umbrella term for analytic techniques and tool types based on data mining, statistical analysis, or complex SQL – sometimes natural language processing and artificial intelligence, as well.
The term has been around since the late 1990s, so you’d think I’d get used to it. But I have to admit that the term “advanced analytics” rubs me the wrong way for two reasons:
First, it’s not a good description of what users are doing or what the technology does. Instead of “advanced analytics,” a better term would be “discovery analytics,” because that’s what users are doing. Or we could call it “exploratory analytics.” In other words, the user is typically a business analyst who is exploring data broadly to discover new business facts that no one in the enterprise knew before. These facts can then be turned into an analytic model or some equivalent for tracking over time.
Second, the thing that chaffs me most is that the way the term “advanced analytics” has been applied for fifteen years excludes online analytic processing (OLAP). Huh!? Does that mean that OLAP is “primitive analytics”? Is OLAP somehow incapable of being advanced?
His answer is that OLAP can indeed be advanced, but not in the same way as advanced analytics. There are important differences:
In my mind, advanced analytics is very much about open-ended exploration and discovery in large volumes of fairly raw source data. But OLAP is about a more controlled discovery of combinations of carefully prepared dimensional datasets. The way I see it: a cube is a closed system that enables combinatorial analytics. Given the richness of cubes users are designing nowadays, there’s a gargantuan number of combinations for a wide range of users to explore.
This is a useful distinction. Although exploratory, OLAP is a controlled and self-contained environment. Someone has decided which dimensions to include and which to exclude, how they should be structured, how deep they should go, how they should be summarised, and so on. Large cubes will indeed offer users a “gargantuan number of combinations”, but this does not necessarily make exploration a richer activity; it may in fact make it ineffective. Manually slicing and dicing twenty dimensions certainly risks being inefficient.
Here it’s helpful to draw a further distinction between OLAP and advanced analytics. OLAP makes multidimensional data exploration about as fast and intuitive as it can be when a human is doing the driving. This means being able to arrange on screen, in two dimensions (perhaps taking advantage of colour and shape to visualise a third and fourth), relatively small subsets and arithmetical summaries of data. Advanced analytics, however, automates exploration. Only data mining methods can look at all dimensions simultaneously, at all levels, in combination. And they can do this in unsupervised (looking for natural structure in the data) or supervised (inferring input-outcome relationships) modes.
Consider also the role of assumptions. OLAP is, as a data exploration vehicle, fairly assumption-laden. Each cube-based analysis reflects, at a minimum, the assumptions of the cube’s user on top of those of its architect. Data mining methods, by contrast, are naïve by design, deliberately insulating exploration from human biases.
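The unsupervised/supervised contrast can be sketched with a few lines of numpy (synthetic data; all variable names and planted coefficients are illustrative). Unsupervised exploration looks for natural structure across all dimensions at once, with no outcome variable in view; supervised exploration infers input-outcome relationships over the same dimensions simultaneously.

```python
import numpy as np

rng = np.random.default_rng(0)

# A synthetic "cube": 1,000 transactions with several dimensions at once.
n = 1000
region = rng.integers(0, 4, n).astype(float)   # illustrative dimension 1
tenure = rng.uniform(0, 10, n)                 # illustrative dimension 2
price = rng.uniform(5, 50, n)                  # illustrative dimension 3
sales = 100 + 8 * tenure - 1.5 * price + rng.normal(0, 5, n)

X = np.column_stack([region, tenure, price])

# Unsupervised mode: look for natural structure across all dimensions
# simultaneously (here, simply the full correlation matrix).
corr = np.corrcoef(X, rowvar=False)

# Supervised mode: infer input-outcome relationships across all dimensions
# at once (here, ordinary least squares of sales on every dimension).
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, sales, rcond=None)
# coef recovers the planted relationships: roughly +8 per unit of tenure,
# roughly -1.5 per unit of price, near zero for the irrelevant region.
```

No slicing or dicing is required: both modes examine every dimension, at every level, in combination, in a single pass, which is exactly what a human driving a cube cannot do.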
Russom argues convincingly against the notion that advanced analytics will render OLAP obsolete:
In defense of OLAP, it’s by far the most common form of analytics in BI today, and for good reasons. Once you get used to multidimensional thinking, OLAP is very natural, because most business questions are themselves multidimensional. For example, “What are western region sales revenues in Q4 2010?” intersects dimensions for geography, function, money, and time.
This is a good illustration of my contention that BI provides context: what happened, where, as described by existing measures and dimensions. What OLAP can’t do is address more causal and complex business questions like:
- What increases sales?
- What predicts loyalty?
- Who is most likely to be loyal in future?
- What is the loyalty profile of the western region?
- How does it compare to other regions?
Such questions are best tackled by advanced analytics in symbiosis with BI.
Industry watchers have been talking up “advanced analytics” for a couple of years now — with no clear indication that the market was ready to follow suit. New market research finds that demand for traditional end-user query, reporting, and analysis technologies continues to outpace demand for advanced analytic technologies.
That’s from Stephen Swoyer at TDWI, commenting on International Data Corp.’s (IDC) recent Worldwide Business Intelligence Tools 2010 study. The study finds that demand for advanced analytics is growing in absolute terms, but that query, reporting and analysis is growing at a faster rate. This supports my contention that query and reporting basics remain a problem for organisations of all sizes.
According to IDC:
“The [advanced analytics market] continues to be dominated by SAS and IBM — which combined hold a 51.4 percent market share — and is therefore more strongly influenced by the performance of just these two vendors,” writes analyst Dan Vesset in the IDC report.
In BI, the post-consolidation megavendors rule the roost:
“The largest of IT companies continue to dominate the BI tools market and to consolidate market share,” writes Vesset, who notes that large IT companies such as IBM Corp., Oracle Corp., and SAP AG — among others — now control more than three-quarters (75.3 percent) of the entire BI market.
It would be interesting to know what proportion of these sales are pure BI or driven primarily by BI requirements, versus BI having been bundled with other software and/or hardware. There is some suggestion in the figures that the shopping cart model accounts for a good deal of the growth. BI grew by 11.4 percent in 2010 but only by 2 percent in 2009. It’s plausible that it took until 2010 for the acquisitions of Hyperion, Business Objects, and Cognos to find their feet in the larger sales machines of Oracle, SAP, and IBM.
Rounding out the market, the “BI-only vendors such as MicroStrategy Inc., SAS, and Information Builders Inc. (IBI) have fortified their markets”, and the emerging pure-play vendors “such as QlikTech International AB, Tableau Software, and Panorama Software have continued to outpace the market, growing at a rate several times that of the BI market as a whole.”
All of this reads consistently with the recent Dresner Advisory Services study.
It needs to be remembered that these measures of market share are fundamentally incomplete. They relate only to the commercial market and exclude commodity and open source software. I’ve pointed out previously that such figures understate the amount of Business Analytics activity going on inside organisations. It’s only a software view, and even then it ignores a lot of the software actually used by analysts.
In interpreting these commercial trends it’s worth understanding the interrelationship between BI and advanced analytics, as well as the nature of the hype cycle.
I would expect an increase in attempts at advanced analytics to drive up BI. BI and advanced analytics are symbiotic. In consulting, my rule of thumb is that every one part of advanced analytics means five parts of BI. BI provides context. A predictive model that scores customers for their likelihood of future churn isn’t going to be valued unless historical churn and its revenue impact are also being reported. Nor is a better statistical forecast going to be appreciated unless actual time series monitoring and appropriate forecast error measurement are in place.
The persistence of query and reporting needs should also be reconciled with the high failure rate of BI initiatives. Successful BI isn’t easy. Most organisations try it more than once, and the default method of attempting it – not always for good reasons – involves purchasing new software. In this context, the hype that Swoyer mentions plays a role. Query and reporting initiatives which successfully affiliate themselves with emerging technology trends are given new life. Organisations get second and third chances to get BI right – under the moniker of being innovative. This may explain the degree to which the industry buzz around advanced analytics has exceeded market performance. It also explains why enterprise search, mobile platforms, big data, social media, and cloud computing are likely to be invoked in the context of contemporary BI initiatives.
Related Analyst First posts:
- The perennial problem
- Measuring the Business Analytics software market
- Snapshot on the Business Intelligence Market
- BI Vendors: Eat Your Own Dog Food
- Data is a spellword
- Solution buying
- Paying for software is buying insurance
- The Big Difference
Respondents to the survey indicated that they want BI applications that can easily integrate with major enterprise applications from the likes of Oracle and SAP. Do smaller or niche BI software vendors have a track record of problems when it comes to integrating with such applications?
[Rick] Sherman: You know, it’s funny if you think about 10 years ago versus now. Ten years ago, the smaller vendors didn’t have access to — and there wasn’t as much knowledge as to what was in — SAP or Oracle apps. But, especially with services and SOA and everything else coming out, the ability to access the enterprise applications has gotten easier and easier. So I really don’t think that’s as big an issue today.
That’s from SearchBusinessAnalytics.com on the results of their March 2011 survey. The interview is with Rick Sherman, “the founder of Athena IT Solutions, a Stow, Mass.-based firm that provides data warehouse and business intelligence consulting, training and vendor services.” Sherman speculates on what might have caused the increase in concerns about data integration problems which the survey brought to light:
I think what’s new is the fact that the [data integration] issues are more visible and more people have access to BI or are trying to do reporting, analytics and BI than ever before. It isn’t that the problems are new. It’s that they’re more visible because more people are encountering them. The other point [from] a business user context [is that they are] initially trying to do reporting and analysis from an existing operational application [and it's just one] source. [There] might be data quality issues, but they do not have to integrate data, because they’re getting it from one source. As soon as you start doing that, you start needing to get data from other applications, and that’s when you start encountering more data quality issues and then more data integration issues.
The distinction between data access, data integration, and data quality is a useful one. Sherman is arguing that data access is a problem of the past, and that the real contemporary challenges are in data integration, not access.
I think that what happens [is that at] the larger firms you have SAP and then you have all the enterprise apps that Oracle has acquired over the years. You’ve got two major spheres of application knowledge that you have to have. But when you get down to the SMBs [small and medium-sized businesses], there are hundreds of enterprise apps, such as financial apps geared toward smaller firms and, more importantly, you start getting into industry-focused applications. [The challenge for SMBs is that they] have a lot more BI apps that might not have as much knowledge as to how to access their [enterprise apps] or those enterprise apps might not be as open as SAP and Oracle are. SMBs also have the issue of figuring out how to integrate with a lot more sources than if you’re talking about larger firms.
But, seemingly at odds with this, survey respondents indicated “that they want BI applications that can easily integrate with major enterprise applications from the likes of Oracle and SAP.” And reading that closely, what they mean by “integrate with” is “access”. So what’s going on?
Sherman addresses the problem in terms of technology (tools) and know-how (knowledge). In fact, there are at least two more dimensions to it. One is complexity and the other is politics.
Yesterday’s post challenged the assumption that the market growth of emerging BI vendors must be coming from small to medium businesses, pointing out that it might also be coming from autonomous business units within larger organisations. Sherman appears to be making a similar conflation – implying that, because larger organisations run enterprise systems from Oracle and SAP, they don’t run anything else. It’s certainly the case that, as he says, smaller businesses run niche applications geared towards their needs (and budgets), but it simply doesn’t follow that large businesses only run enterprise applications. I would expect the opposite to be true. Large organisations become large in part through acquisition, and large organisations have more moving parts. Large organisations are more complex. I would expect them to be running more systems, not fewer, many of them niche, heavily customised, and developed in-house. Sherman’s larger point is well taken: less common systems are harder to access, and the more systems in place, the harder the integration effort. But there’s no reason to think this problem has gone away for large organisations because they’ve purchased enterprise apps.
Enterprise systems are more IT-reliant, meaning more organisational functions between users and data, increased layers of internal policy compliance, additional bureaucracy, depersonalised communication channels, more dispersed knowledge and, as two previous posts have argued, divergent incentives.
The common thread running through all of this is that business systems have been consistently poor at making their data available to analysts in manipulable form. Routine access to verbose data – native, unsummarised and readily tractable – is a perennial problem. Verbose data at its most basic doesn’t need to be clean or integrated, just available. That is all many analysts need.
In technical terms this translates into a query and reporting deficit. Over the last decade or so I’ve watched as the same elemental query and reporting needs have piggybacked on a succession of ‘sexier’ requirement sets, such as:
- Business Intelligence
- KPIs, Dashboards and Scorecards
- Performance Management
- Business Analytics
Also attaching to various parallel technological trends:
- Enterprise Search
- Web 2.0
- Mobile Platforms
- Cloud Computing
- Social Media
- Big Data
Furthermore, query and reporting needs repeatedly merge into the related but different objectives of various data management and infrastructural projects, for example:
- Data Warehousing
- Master Data Management
- Data Quality
- Data Governance
None of this takes anything away from any of the above disciplines, each of which tackles real and distinct problems. The point is simply that basic query and reporting remains a problem.
That’s the title of today’s Harvard Business Review Management Tip of the Day, which is worth reproducing in full:
Differentiating your company based on products or cost is near impossible these days, especially in crowded industries. Instead, pull ahead of the pack by using data-collection technology and analysis to get value from all of your business processes. Analytics let you discern not only what your customers want, but how much they’re willing to pay and what keeps them loyal. It also arms your employees with the evidence and tools they need to make sound decisions. Start by championing analytics from the top. Acknowledge and endorse the changes in culture, process, and skills that analytics competition requires. Be sure that you understand the theory behind various quantitative methods so you can recognize their limitations. If necessary, bring in experts who can advise on how to best apply analytics to your business.
The difference, and relationship, between data and information is a common debate. Not only do these two terms have varying definitions, but they are often used interchangeably.
Just a few examples include comparing and contrasting data quality with information quality, data management with information management, and data governance with information governance.
That’s Jim Harris at Information Management. He cites the distinctions commonly made between data, information, knowledge, and wisdom, arguing that the term Knowledge Management makes a lot of sense as a way of describing the goals of business intelligence:
I can’t help but wonder if the debate about data and information obfuscates the fact that the organization’s appetite, its business hunger, is for knowledge.
He concludes with three insightful questions, designed to determine whether the distinctions are consequential or merely linguistic:
- Does your organization make a practical distinction between data and information?
- If so, how does this distinction affect your quality, management, and governance initiatives?
- What is the relationship between those initiatives and your business intelligence efforts?
The post is interesting to me because it catalogues various attempts to get away from the word “data”. In my experience, “data” is a spellword. Its invocation gives people permission to tune out and to dismiss what follows as technical, geeky, and irrelevant. Business Analytics practitioners themselves don’t do this, of course, but businesspeople often do. Status signals are important inside organisations, and by virtue of association, “data” is lower status than “information” or “knowledge”. All other things being equal, Information Management therefore carries greater business cachet than Data Management.
Knowledge is further up the value chain than information, and as a previous post has noted, linguistic slipperiness is common in business. So why not Knowledge Management? Harris notes that the term is no longer in vogue in the business world. The key reason, I suspect, is that enough initiatives coined ‘Knowledge Management’ when the term was still fresh went on to fall short of expectations.
The last fifteen years saw a lot of reporting get rebranded as Business Intelligence, and we are now seeing lots of Business Intelligence getting rebranded as Analytics. Lots of Business Intelligence efforts have disappointed their sponsors, so I wasn’t surprised when a Melbourne colleague told me yesterday that Business Intelligence was becoming a dirty word around town.
About us

Analyst First is a new approach to analytics, in which tools take a far less important place than the people who perform, manage, request and envision analytics. Analytics is seen as a non-repetitive, exploratory and creative process whose outcome is not known at the start, and in which only a fraction of efforts are expected to result in success. This is in contrast with a common perception of analytics as IT and process.