Founder Stephen Samild presented some new ideas on Business Intelligence:
There may be a more detailed post on the subject by Stephen at a later stage.
Yours truly was interviewed by CeBIT. The topic was, broadly: “What the heck is this Data Science Thing Anyway?”
On Tuesday night I presented Getting started with Predictive Analytics in the Public Sector to a public meeting of Analyst First in Canberra.
The presentation itself is an update of one given in June to Canberra’s IBM Business Analytics User Group. For this version I added material describing how analytics supports the risk management cycle, and incorporated some insights from Jim Manzi’s excellent Uncontrolled: The Surprising Payoff of Trial-and-Error for Business, Politics, and Society.
Part 1 of the highly recommended Uncontrolled covers the evolution of the scientific method (from Bacon on experimentation, to Hume on induction, to Popper on falsification, to Kuhn on scientific paradigms, through to the present day). Part 2 looks at the development of randomised field trials in the latter half of the twentieth century and their applications in medicine and business (i.e. analytics). Part 3 advocates the more widespread and systematic application of randomised field trials to public policy, learning from the business experiment revolution.
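The business-experiment thread of Part 2 is easy to make concrete. Below is a minimal sketch, in Python with invented numbers, of how a simple randomised business trial is typically read: split customers at random into control and treatment arms, then test whether the difference in conversion rates is larger than chance would produce.

```python
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test: did the treatment move the conversion rate?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)            # pooled rate under the null
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Normal CDF via the error function; p-value is the two-sided tail mass.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Invented numbers: 10,000 customers per arm, 8.0% vs 9.1% conversion.
z, p = two_proportion_z(800, 10_000, 910, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A small p-value here says only that the treatment effect is unlikely to be noise; the point Manzi stresses is that without the randomisation step, no amount of analysis on observational data licenses the same causal reading.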
Our thanks to BAE Systems for providing the venue.
Ted Cuzzillo, writing at TDWI and citing Blake Johnson of Stanford, identifies 6 conditions for [or barriers to] the rise of business analysts:
1. The best analysts are skilled in three areas: First, they engage stakeholders and have an eye for business opportunity. Second, they inspire stakeholders’ trust with consistently excellent analysis. Third, “big data” requires skill with data management and software engineering.
This paints a similar but not identical picture to Drew Conway’s Data Scientist Venn Diagram. The key point of difference is that Conway places more weight on mathematical and statistical training, which is not the same thing as “consistently excellent analysis”, but is more important than is often assumed in enabling it.
2. Each analyst’s skills should be about 80 percent in data management and about 20 percent in business and analytics — but Johnson expects that to change over the next five or 10 years as tools make data management easier. Eventually the mix of skills will be the opposite: 20 percent data management and 80 percent business.
I have no strong view on this, but my intuition is that data wrangling will always consume far more time and effort than analysis. Analysis is a feedback loop and a read-write activity. Standardisation and automation continue to consolidate efficiencies but these tend to raise the analytical bar. That said, I’d be happy for future tools to prove me wrong.
3. Gaining a foothold within an organization is best done in small bites with an entrepreneurial approach. Forget trying for a “big bang,” he says. Instead, find a need and fill it quickly, then move on to others. Identify and solve one business problem after another — always making sure to keep your methods scalable.
This agrees with Analyst First’s contention, seconded by others, that the monolithic IT project approach doesn’t work, and that—within an existing organisation—a bottom-up Lean Startup approach is your best bet. The only exceptions to this are analytic-centric online startups and quantitative hedge funds.
4. Location of analysts’ workspace matters. They should work in a cluster for critical mass, which encourages sharing of best practices and support. If they sit within business teams, their work becomes more visible.
This makes sense. Isolated analysts are a problem whether they’re isolated from each other or from management oversight and direction. Generally speaking, senior executives need to be broadened while analysts need to be narrowed. Middle managers need to be skilled up to bridge between the two.
5. It’s an adjustment for everyone — on the business side but especially on the IT side. It means fundamental changes in the way data is organized and managed, and accessed and used, with both new technologies and skill sets.
6. Many IT pros deny access to data based on obsolete knowledge. Johnson reports that many don’t know about modern load-balancing and other technology that make such access safer.
Certainly true. I’ve written before about the data needs of analysts as distinct from traditional business intelligence consumers, and also observed that big data is at once driving up the need for advanced analytics and rendering traditional data warehousing approaches obsolete. But the odd part about the commonly invoked ‘IT vs business’ balance of power is the acceptance of IT as a ‘stakeholder’ as opposed to an enabler. It’s unquestionably the case that analytics doesn’t happen without software, but that’s just as true of accounting, graphic design, and most other activities conducted in front of computers in today’s workplace. It simply doesn’t follow that IT deserves, so to speak, a seat on the Security Council.
Cuzzillo closes well aware of both the future possibilities for Business Analytics, and the status quo political realities standing in its way:
You would think that both sides would sign up for the bargain the new middlemen [i.e. analysts] seem to offer. IT would cede control and concentrate on what it does best, managing the back end. Meanwhile, business stakeholders would get insights from these newly empowered, eager specialists. Analysts would be newly ready to answer business questions, conjure up new questions, and offer strategic options.
Analysts would colonize what had been the no-man’s-land between IT and business. Trouble is, the analysts may end up ruining the neighborhood for them. If the strategies Johnson suggests work, IT and business would find a new power growing alongside them. Analysts — simply from the position they would find themselves in, not from any wish to rule the world — would be indispensable, powerful, and well funded.
Who wouldn’t want that?
Related Analyst First posts:
- Analytics Education and Recruitment – Builders vs Finders
- Analysis is read-write
- Forrester on the need for agility
- Analytics Is… A Lean Startup Enterprise
- Why Software Is Eating The World
- The data needs of analysts
- Big data as an advanced analytics driver
- Building for Yesterday’s Future
- The Elusive Definition of Agile Analytics
We’re building new information delivery systems for a future that isn’t there. Our state-of-the-art environments are already becoming obsolete because our view is distorted by the lens of the past, showing us the future as it was years ago. That world of scarce computing resources and limited data is gone.
That’s Mark Madsen at TDWI, arguing that many of the key assumptions driving our construction of analytic systems—decision support systems, data warehouses, and business intelligence—are wrong. The first wrong assumption is of scarcity. Processor cycles, memory, and storage used to be expensive. They aren’t any longer, but we’re still batch processing our ETL, prematurely archiving, summarising and normalising our data, and limiting our storage of derived information.
His second target is the tabula rasa impulse:
Most data warehouse and BI methodologies assume that you start with no analysis systems in place. The methodologies were created at a time when information delivery meant reports from OLTP applications.
The reality today is that analytics projects don’t start with a clean slate. Reporting and BI applications are common in different parts of the organization.
Third is the assumption of stability. Build-from-scratch methodologies made sense the first time around, but:
By not focusing on evolution, the methodologies miss a key element about analytics: they often focus on decisions that change business processes. Process change means the business works differently and new data will be needed. When someone solves a problem, they move on to a new problem. The work is never done because an organization is constantly adapting to changing market conditions.
One of Analyst First’s key principles is that:
Analytics is not a linear process, like most engineering projects. Its end product is discovery: you cannot determine what will be discovered ahead of time. Thus the outcomes of analytics, and the decisions based on them, cannot be made before the analysis has been carried out.
Consequently we advocate the primacy and ongoing centrality of strategic analytics over operational analytics. The more analytics is conceived of as a set of activities which only adds value at existing operational margins, the more it is unnecessarily constrained and the less it is able to change the game. Echoing yesterday’s Analyst First post on analysis as a read-write activity, Madsen continues:
Business intelligence methods and architecture assume that what’s being built is a single system to meet all data needs. We still think of analytics as giving reports to users. This ignores what they really want: information in the context of their work process and in support of their goals. Sometimes reports are sufficient; sometimes more is needed.
He goes on to confirm that big data is both challenging status quo electronic infrastructures and driving demand for higher value advanced analytics:
The interaction model for BI delivery is that a user asks a question and gets an answer. This only works if they know what they are looking for. Higher data volumes, more sophisticated business needs, and high-performance platforms require that BI be extended to include advanced analytics. These answer “why” questions that can’t be answered by the simple sums and sorts of BI.
As I’ve argued before, the assumption-heaviness and manual intensiveness of standard BI technologies such as OLAP can’t compete, at scale, with the automated exploration that machine learning methods make possible. Madsen concludes that the data warehouse should be conceived of as a platform rather than an application. His closing four paragraphs are worth quoting in full:
The data warehouse has evolved to the point where it needs to provide data infrastructure, and needs to support information delivery by other applications rather than trying to do both. Data infrastructure requires a focus on longer planning horizons, stability where it matters, and standardized services. Information delivery requires meeting specific needs and use cases.
Design methods today seldom address the need to separate data infrastructure from delivery applications. Designs focus on data management and fitting the database to the delivery tools. This leads to IT efforts to standardize on one set of user tools for everything, much like Henry Ford tried to limit the color of his cars to black.
The new needs and analysis concepts go against the idea that a data warehouse is a read-only repository with one point of entry. They do not fit with established ideas, tools, and methodologies.
Today, the tight coupling of data, models and tools via a single SQL-based access layer prevents us from delivering what both business users and application developers need. The data warehouse must be split into data management infrastructure that can meet high-performance storage, processing, and retrieval needs, and an application layer that is decoupled from this infrastructure. This separation of storage and retrieval from delivery and use is a key concept required by data warehouse architectures as business and technology move forward.
If you’ve attended more than a few Business Analytics conferences you’ll have seen numerous non-vendor presentations which fall into one of two categories.
The Honeymoon Presentation chronicles the experiences of a solution buyer. It’s often co-presented or sponsored by a software vendor, and is very common at vendor-organised forums such as user conferences. It covers the identification of a business problem, the formation of a project team, the creation of a business case, the development of business requirements, the challenges of stakeholder management, the discovery of trade-offs, and the resulting criteria and process by which a supplier was selected. It closes with ambitions and plans for the future. It may not be explicitly stated by the presenter, but nothing has been rolled out yet. That is, it reflects the perspective of an organisation in the early stages of Business Analytics, having followed the default, top-down, project-based, IT-centric model.
The Proud Parent Presentation comes from this same perspective but is delivered after a project phase has been successfully completed. As such it will typically relay much of the above narrative, adding an overview and some screenshots of the system that has been implemented and closing with some ‘lessons learned’ in the process. Most Proud Parent presentations are fairly interchangeable in terms of the generic outcomes described. In the BI context, typical outcome claims are things like “now our business users have more information at their fingertips” and “for the first time we have one version of the truth”. What varies most are the characteristics of the presenting organisation: industry, size, products, customers, and so on.
There are two types of presentation you most want to see but never do. The first is the Failure Presentation. Rather than being a story of general success sprinkled with a few tips and tricks picked up along the way, it’s about the hard lessons that only painful reflection on resounding failure can teach. Then there’s the Arms Race Presentation, in which an organisation reveals the inner workings of how it competes on analytics, manages its strategic threats, and stays a step ahead of its adversaries. I have never seen either of these presentations at public conferences, but they do exist. They’re the most informative, but for obvious reasons the least likely to see the light of day on the conference circuit.
Any analysis can be understood as the intersection of audience and subject. In the Business Analytics context, typical audiences are you, your customers, and your prospects. Typical subjects—for analyses that model human behaviour as opposed to other processes—include yourself, your customers, your competitors, and your adversaries. Some examples:
- Performance Management: for the organisation, about itself—e.g. employee scorecards, HR cubes, management reporting
- Most BI: for the organisation, about its customers—e.g. sales cubes
- Customer Intelligence: for the organisation, about its competitors and prospects
- Risk Intelligence: for the organisation, about its adversaries
- Most B2C Analytics: for customers and prospects, about customers—e.g. a commerce website’s recommendations engine
Most BI is employee-facing. Most analytics, as it gets operationalised, is aimed at customers and prospects in the form of surveys, experiments, recommendations, and targeted interactions and offers.
Day 3, Week 1 of the CORTEX MBAnalytics program covers ‘Competing on Talent Analytics’, by Davenport, Harris, and Shapiro, from the October 2010 Harvard Business Review. The essay describes applying to employees the advanced analytics methods more typically aimed at customers. This might otherwise be termed ‘talent analytics’, ‘HR analytics’, or ‘Performance Management analytics’.
Most of the literature on why BI projects fail tends to put some variant of ‘a lack of communication between IT and the business’ near the top of the list of culprits. This implies a relationship between two parties prone to friction. Either IT or ‘the business’ can own the BI initiative, but each needs the other in order for it to progress. In crude terms, the business has information needs; IT has data and the electronic infrastructure required to access it, transform it, and publish it as information. The two parties don’t speak the same language so each has to second-guess the other. Each is also required to generalise the multiple views of its members into a unified position for representation and negotiation purposes. The result is an equilibrium conception of the information needs of business consumers which may only poorly approximate each individual user’s actual needs.
The data needs of analysts are, by comparison, easier to serve than the information needs of business consumers. Analysts want data in closer to its raw form, and are less reliant on others for its transformation into packaged information. There are fewer representation and negotiation steps between demand and fulfilment.
Philip Russom at the TDWI Blog:
The current hype and hubbub around big data analytics has shifted our focus on what’s usually called “advanced analytics.” That’s an umbrella term for analytic techniques and tool types based on data mining, statistical analysis, or complex SQL – sometimes natural language processing and artificial intelligence, as well.
The term has been around since the late 1990s, so you’d think I’d get used to it. But I have to admit that the term “advanced analytics” rubs me the wrong way for two reasons:
First, it’s not a good description of what users are doing or what the technology does. Instead of “advanced analytics,” a better term would be “discovery analytics,” because that’s what users are doing. Or we could call it “exploratory analytics.” In other words, the user is typically a business analyst who is exploring data broadly to discover new business facts that no one in the enterprise knew before. These facts can then be turned into an analytic model or some equivalent for tracking over time.
Second, the thing that chafes me most is that the way the term “advanced analytics” has been applied for fifteen years excludes online analytic processing (OLAP). Huh!? Does that mean that OLAP is “primitive analytics”? Is OLAP somehow incapable of being advanced?
His answer is that OLAP can indeed be advanced, but not in the same way as advanced analytics. There are important differences:
In my mind, advanced analytics is very much about open-ended exploration and discovery in large volumes of fairly raw source data. But OLAP is about a more controlled discovery of combinations of carefully prepared dimensional datasets. The way I see it: a cube is a closed system that enables combinatorial analytics. Given the richness of cubes users are designing nowadays, there’s a gargantuan number of combinations for a wide range of users to explore.
This is a useful distinction. Although exploratory, OLAP is a controlled and self-contained environment. Someone has decided which dimensions to include and which to exclude, and how they should be structured, and how deep they should go, and how they should be summarised, and so on. Large cubes will indeed offer users a “gargantuan number of combinations”, but this does not necessarily make exploration a richer activity. It may in fact make it ineffective. Manually slicing and dicing twenty dimensions certainly risks being inefficient.
Here it’s helpful to draw a further distinction between OLAP and advanced analytics. OLAP makes multidimensional data exploration about as fast and intuitive as it can be when a human is doing the driving. This means being able to arrange on screen, in two dimensions (perhaps taking advantage of colour and shape to visualise a third and fourth), relatively small subsets and arithmetical summaries of data. Advanced analytics, however, automates exploration. Only data mining methods can look at all dimensions simultaneously, at all levels, in combination. And they can do this in unsupervised (looking for natural structure in the data) or supervised (inferring input-outcome relationships) modes.
Consider also the role of assumptions. OLAP is, as a data exploration vehicle, fairly assumption-laden. Each cube-based analysis reflects, at a minimum, the assumptions of the cube’s user on top of those of its architect. Data mining methods, by contrast, are naïve by design, deliberately insulating exploration from human biases.
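The contrast can be sketched in a few lines of Python (toy records and field names are invented for illustration): the OLAP-style question requires a human to pick the slice in advance, while even a crude mining-style scan ranks every dimension by how strongly it separates an outcome, with no one deciding where to look.

```python
# Toy customer records; fields and values are invented for illustration.
customers = [
    {"region": "west", "plan": "basic",   "tenure": "short", "churned": True},
    {"region": "west", "plan": "premium", "tenure": "long",  "churned": False},
    {"region": "east", "plan": "basic",   "tenure": "short", "churned": True},
    {"region": "east", "plan": "premium", "tenure": "long",  "churned": False},
    {"region": "west", "plan": "basic",   "tenure": "long",  "churned": False},
    {"region": "east", "plan": "basic",   "tenure": "short", "churned": True},
]

# OLAP-style: a human picks one intersection and reads off an aggregate.
west_basic = [c for c in customers if c["region"] == "west" and c["plan"] == "basic"]
churn_rate = sum(c["churned"] for c in west_basic) / len(west_basic)

# Mining-style: scan every dimension automatically and rank each by how
# far churn rates spread across its values (a deliberately crude score).
def spread(dim):
    rates = []
    for v in {c[dim] for c in customers}:
        group = [c for c in customers if c[dim] == v]
        rates.append(sum(c["churned"] for c in group) / len(group))
    return max(rates) - min(rates)

dims = ["region", "plan", "tenure"]
ranked = sorted(dims, key=spread, reverse=True)
print(ranked)  # → ['tenure', 'plan', 'region']
```

Real data mining methods use better association measures (information gain, chi-squared) and consider interactions between dimensions, but the shape of the contrast is the same: the human supplies the question in the first case, and only the outcome of interest in the second.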
Russom argues convincingly against the notion that advanced analytics will render OLAP obsolete:
In defense of OLAP, it’s by far the most common form of analytics in BI today, and for good reasons. Once you get used to multidimensional thinking, OLAP is very natural, because most business questions are themselves multidimensional. For example, “What are western region sales revenues in Q4 2010?” intersects dimensions for geography, function, money, and time.
This is a good illustration of my contention that BI provides context: what happened, where, as described by existing measures and dimensions. What OLAP can’t do is address more causal and complex business questions like:
- What increases sales?
- What predicts loyalty?
- Who is most likely to be loyal in future?
- What is the loyalty profile of the western region?
- How does it compare to other regions?
Such questions are best tackled by advanced analytics in symbiosis with BI.
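Russom’s western-region example can be made concrete with a minimal sketch (sales figures invented for illustration): a cube is, at heart, a measure pre-aggregated at every dimensional intersection, so the multidimensional question becomes a lookup. None of the causal questions listed above reduce to a lookup of this kind; they need a fitted model rather than a filter and a sum.

```python
from collections import defaultdict

# Invented sales facts: each row intersects geography and time dimensions.
sales = [
    {"region": "west", "quarter": "2010Q3", "revenue": 95_000},
    {"region": "west", "quarter": "2010Q4", "revenue": 120_000},
    {"region": "west", "quarter": "2010Q4", "revenue": 60_000},
    {"region": "east", "quarter": "2010Q4", "revenue": 140_000},
]

# A cube pre-computes sums at every (region, quarter) intersection.
cube = defaultdict(int)
for row in sales:
    cube[(row["region"], row["quarter"])] += row["revenue"]

# "What are western region sales revenues in Q4 2010?" becomes a lookup.
print(cube[("west", "2010Q4")])  # → 180000
```

Adding dimensions multiplies the intersections, which is where the “gargantuan number of combinations” comes from, and also why exhaustive manual exploration stops scaling.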
Industry watchers have been talking up “advanced analytics” for a couple of years now — with no clear indication that the market was ready to follow suit. New market research finds that demand for traditional end-user query, reporting, and analysis technologies continues to outpace demand for advanced analytic technologies.
That’s from Stephen Swoyer at TDWI, commenting on International Data Corp.’s (IDC) recent Worldwide Business Intelligence Tools 2010 study. The study finds that demand for advanced analytics is growing in absolute terms, but that query, reporting and analysis is growing at a faster rate. This supports my contention that query and reporting basics remain a problem for organisations of all sizes.
According to IDC:
“The [advanced analytics market] continues to be dominated by SAS and IBM — which combined hold a 51.4 percent market share — and is therefore more strongly influenced by the performance of just these two vendors,” writes analyst Dan Vesset in the IDC report.
In BI, the post-consolidation megavendors rule the roost:
“The largest of IT companies continue to dominate the BI tools market and to consolidate market share,” writes Vesset, who notes that large IT companies such as IBM Corp., Oracle Corp., and SAP AG — among others — now control more than three-quarters (75.3 percent) of the entire BI market.
It would be interesting to know what proportion of these sales are pure BI or driven primarily by BI requirements, versus BI having been bundled with other software and/or hardware. There is some suggestion in the figures that the shopping cart model accounts for a good deal of the growth. BI grew by 11.4 percent in 2010 but only by 2 percent in 2009. It’s plausible that it took until 2010 for the acquisitions of Hyperion, Business Objects, and Cognos to find their feet in the larger sales machines of Oracle, SAP, and IBM.
Rounding out the market, the “BI-only vendors such as MicroStrategy Inc., SAS, and Information Builders Inc. (IBI) have fortified their markets”, and the emerging pure-play vendors “such as QlikTech International AB, Tableau Software, and Panorama Software have continued to outpace the market, growing at a rate several times that of the BI market as a whole.”
All of this reads consistently with the recent Dresner Advisory Services study.
It needs to be remembered that these measures of market share are fundamentally incomplete. They relate only to the commercial market and exclude commodity and open source software. I’ve pointed out previously that this understates how much Business Analytics activity is going on inside organisations. It’s only a software view, and even then it ignores a lot of the software actually used by analysts.
In interpreting these commercial trends it’s worth understanding the interrelationship between BI and advanced analytics, as well as the nature of the hype cycle.
I would expect an increase in attempts at advanced analytics to drive up BI. BI and advanced analytics are symbiotic. In consulting, my rule of thumb is that every one part of advanced analytics means five parts of BI. BI provides context. A predictive model that scores customers for their likelihood of future churn isn’t going to be valued unless historical churn and its revenue impact are also being reported. Nor is a better statistical forecast going to be appreciated unless actual time series monitoring and appropriate forecast error measurement are in place.
The persistence of query and reporting needs should also be reconciled with the high failure rate of BI initiatives. Successful BI isn’t easy. Most organisations try it more than once, and the default method of attempting it – not always for good reasons – involves purchasing new software. In this context, the hype that Swoyer mentions plays a role. Query and reporting initiatives which successfully affiliate themselves with emerging technology trends are given new life. Organisations get second and third chances to get BI right – under the moniker of being innovative. This may explain the degree to which the industry buzz around advanced analytics has exceeded market performance. It also explains why enterprise search, mobile platforms, big data, social media, and cloud computing are likely to be invoked in the context of contemporary BI initiatives.
Related Analyst First posts:
- The perennial problem
- Measuring the Business Analytics software market
- Snapshot on the Business Intelligence Market
- BI Vendors: Eat Your Own Dog Food
- Data is a spellword
- Solution buying
- Paying for software is buying insurance
- The Big Difference
Respondents to the survey indicated that they want BI applications that can easily integrate with major enterprise applications from the likes of Oracle and SAP. Do smaller or niche BI software vendors have a track record of problems when it comes to integrating with such applications?
[Rick] Sherman: You know, it’s funny if you think about 10 years ago versus now. Ten years ago, the smaller vendors didn’t have access to — and there wasn’t as much knowledge as to what was in — SAP or Oracle apps. But, especially with services and SOA and everything else coming out, the ability to access the enterprise applications has gotten easier and easier. So I really don’t think that’s as big an issue today.
That’s from SearchBusinessAnalytics.com on the results of their March 2011 survey. The interview is with Rick Sherman, “the founder of Athena IT Solutions, a Stow, Mass.-based firm that provides data warehouse and business intelligence consulting, training and vendor services.” Sherman speculates on what might have caused the increase in concerns about data integration problems which the survey brought to light:
I think what’s new is the fact that the [data integration] issues are more visible and more people have access to BI or are trying to do reporting, analytics and BI than ever before. It isn’t that the problems are new. It’s that they’re more visible because more people are encountering them. The other point [from] a business user context [is that they are] initially trying to do reporting and analysis from an existing operational application [and it's just one] source. [There] might be data quality issues, but they do not have to integrate data, because they’re getting it from one source. As soon as you start doing that, you start needing to get data from other applications, and that’s when you start encountering more data quality issues and then more data integration issues.
The distinction between data access, data integration, and data quality is a useful one. Sherman is arguing that data access is a problem of the past, and that the real contemporary challenges are in data integration, not access.
I think that what happens [is that at] the larger firms you have SAP and then you have all the enterprise apps that Oracle has acquired over the years. You’ve got two major spheres of application knowledge that you have to have. But when you get down to the SMBs [small and medium-sized businesses], there are hundreds of enterprise apps, such as financial apps geared toward smaller firms and, more importantly, you start getting into industry-focused applications. [The challenge for SMBs is that they] have a lot more BI apps that might not have as much knowledge as to how to access their [enterprise apps] or those enterprise apps might not be as open as SAP and Oracle are. SMBs also have the issue of figuring out how to integrate with a lot more sources than if you’re talking about larger firms.
But, seemingly at odds with this, survey respondents indicated “that they want BI applications that can easily integrate with major enterprise applications from the likes of Oracle and SAP.” And reading that closely, what they mean by “integrate with” is “access”. So what’s going on?
Sherman addresses the problem in terms of technology (tools) and know-how (knowledge). In fact, there are at least two more dimensions to it. One is complexity and the other is politics.
Yesterday’s post challenged the assumption that the market growth of emerging BI vendors must be coming from small to medium businesses, pointing out that it might also be coming from autonomous business units within larger organisations. Sherman appears to be making a similar conflation – implying, because larger organisations run enterprise systems from Oracle and SAP, that they don’t have anything else. It’s certainly the case that, as he says, smaller businesses run niche applications geared towards their needs (and budgets), but it simply doesn’t follow that large businesses only run enterprise applications. I would expect the opposite to be true. Large organisations become large in part through acquisition, and large organisations have more moving parts. Large organisations are more complex. I would expect them to be running more systems, not fewer, many of them niche, heavily customised, and developed in-house. Sherman’s larger point is well taken: less common systems are harder to access, and the more systems in place, the harder the integration effort. But there’s no reason to think this problem has gone away for large organisations because they’ve purchased enterprise apps.
Enterprise systems are more IT-reliant, meaning more organisational functions between users and data, increased layers of internal policy compliance, additional bureaucracy, depersonalised communication channels, more dispersed knowledge, and as two previous posts have argued, divergent incentives.
The common thread running through all of this is that business systems have been consistently poor at making their data available to analysts in manipulable form. Routine access to verbose data – native, unsummarised and readily tractable – is a perennial problem. Verbose data at its most basic doesn’t need to be clean or integrated, just available. That is all many analysts need.
In technical terms this translates into a query and reporting deficit. Over the last decade or so I’ve watched as the same elemental query and reporting needs have piggybacked on a succession of ‘sexier’ requirement sets, such as:
- Business Intelligence
- KPIs, Dashboards and Scorecards
- Performance Management
- Business Analytics
Also attaching to various parallel technological trends:
- Enterprise Search
- Web 2.0
- Mobile Platforms
- Cloud Computing
- Social Media
- Big Data
It’s furthermore the case that query and reporting needs repeatedly merge themselves with the related but different objectives of various data management and infrastructural projects, for example:
- Data Warehousing
- Master Data Management
- Data Quality
- Data Governance
None of this takes anything away from any of the above disciplines, each of which tackles real and distinct problems. The point is simply that basic query and reporting remains a problem.
About us
Analyst First is a new approach to analytics: tools take a far less important place than the people who perform, manage, request and envision analytics, and analytics is seen as a non-repetitive, exploratory and creative process whose outcome is not known at the start, and in which only a fraction of efforts are expected to result in success. This is in contrast with a common perception of analytics as IT and process.