Klaus Felsche is the Director of Intent Management & Analytics for the Australian Department of Immigration and Citizenship (DIAC). We sat down recently to talk about the evolution of Analytics at DIAC, how DIAC’s Analytics initiatives are managed, and the composition of its Analytics team.

ANALYTICS AT DIAC

SS:  How did you first become aware of Analytics?

KF: In 2009 I was asked to reshape my job in the Border Security Division at the behest of the Chief Information Officer, who wanted to have an Analytics capability in DIAC. I had to Google “Analytics” to find out what that was supposed to mean, which was less satisfactory than perhaps it should have been. There was a move across government and amongst various CIOs and other interested players, along with a big push from industry, to go and get a hold of Analytics—whatever it was at the time—in order to try to inform decision making in a more constructive way than had been possible before. So we didn’t at that point have a detailed understanding of what it was and what it involved, but we had a reasonable sense that it might be a good thing.

SS: Where did you go—past those initial Google searches?

KF: I started doing the exploration and following the trails and found a lot of dead ends. I also found that there were some large multinational suppliers offering to do what was called “Analytics” work, and I guess at that stage they were limited in what we could expect from them in a number of different ways. For example, I don’t think we provided much good business input into what these organisations were seeking to do, because we didn’t understand our problems in analytic terms. They didn’t seem to understand our business as well as we thought they should—or perhaps as well as they should have. They offered some fairly simplistic solutions, which were much more traditional business intelligence focused than Analytics focused. From our point of view, the whole thing was about identifying potential risk to our various business lines—that’s what Analytics was really aimed at. Following those lines through, I was fortunate that I then had access to people like Warwick Graco from Tax [the Australian Taxation Office (ATO)]. Warwick was one of the first people to help us distil our own way of thinking about what Analytics meant in a large government organisation and how it would be operationalised. As an aside, I had worked with Warwick in the 1980s when we were both serving at the Australian Army’s Officer Cadet School. And it was through that, and listening to Eugene [Dubossarsky] at one of the ATO Community of Practice sessions in late 2009, early 2010, which Tax kindly invited us to, that we started to form our own definition and identify what we could do in terms of immigration business.

We also spoke to some of the banks, and the odd insurance company, and a number of organisations along those lines to get some baseline comparisons. At the same time, of course, once we started asking questions, word got out to the IT industry, which started to push quite hard to capture a portion of our business. I was quite fortunate in that we had no budget to speak of, so I was relatively immune from making foolish business decisions.

So we had started to form our ideas, and then it was a case of fitting into some of the organisational transformation processes. In 2010, the Risk, Fraud and Integrity Division was formed to provide risk-based services to the rest of the organisation. When I was in the Border Security Division, the focus was usually on border security and border transactions, and much of the visa caseload was secondary. The Risk, Fraud and Integrity Division allowed us to look across all of the organisation. We then ran a large workshop at the Deputy Secretary level involving Australian agencies, and New Zealand participants also joined us. Graham Williams from Tax was asked to present because we had identified him as someone who clearly knew what he was talking about. All of that laid the foundation for what we’re doing now.

SS: What did an Analytics capability mean then? How did you and others picture it?

KF: The traditional view was it was something to do with IT but providing a business service. Some had the sense that an Analytics capability was an additional service provided by our Business Intelligence platforms. The Chief Information Officer at the time viewed it as an IT-enabled service but he didn’t give it to the IT people. He gave it to the business side to progress, which to me demonstrated that he understood it. I think there was a deliberate attempt to try to get a more open approach to looking at data and what it could do to inform the organisation. The CIO was very clear that we had to be focused on risk identification in the visa caseload.

SS: How was the upside pictured? Not letting higher-risk cases through? Being much more efficient in processing lower risks? Gaining new intelligence? Having an evidence base for making decisions?

KF: At some level, all of those. But I think the main business driver was efficiency: identifying risks and then applying differentiated treatments appropriate to them. Some aspects of this are counter-intuitive when you start looking at the detail. A high risk case may actually require minimal treatment, because if you can identify it the decision is relatively simple. You don’t give the person a visa if they represent high risk. The decision can be made in a few seconds. The same is true of low risk. If you can’t see much risk then the processing is also relatively straightforward. The hard part of the caseload is usually the large component in the middle. That is actually the intensive part of visa processing. So very quickly in our area we started focusing on trying to identify as clearly as we could the likelihood of particular risks occurring in the caseload. That is really what has been our driver.
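To make the shape of that triage concrete, here is a minimal sketch in Python. The thresholds, score and routing labels are purely illustrative assumptions, not DIAC’s actual settings.

```python
# Illustrative only: routing a case on an estimated probability of risk.
# Thresholds and labels are hypothetical, not DIAC's operational values.
def triage(risk_score, high=0.9, low=0.1):
    """Route a visa application according to a model-derived risk score."""
    if risk_score >= high:
        return "refer for refusal consideration"  # clear-cut, decided in seconds
    if risk_score <= low:
        return "streamlined processing"           # little visible risk, also quick
    return "full manual assessment"               # the large, labour-intensive middle

if __name__ == "__main__":
    for score in (0.95, 0.03, 0.42):
        print(f"{score:.2f} -> {triage(score)}")
```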

A big issue for us was cultural change. If I go home to watch a football game on television, and I turn the television on and get the right channel, and the sound is OK, and I can see the picture—then I really don’t question the wiring behind the screen. I am looking at performance, and I can see an outcome which convinces me either that I bought a good television set or a rotten one. In this department, there is a legal construct called the delegate who makes decisions on behalf of the minister. An officer will make a visa decision or a border decision effectively as the Minister’s delegate, and it is that person who needs to be satisfied that a decision is valid and complies with the law and policy settings. Many officers are not prepared to make decisions unless they have personally checked all aspects, even if an analytics process informs them that risks are negligible, nor are they prepared to believe a system that says “Trust me, you don’t have to do the full gamut of processing”. So we naturally get pushback from the individual, who says, “Well, I am the one who is the delegate, not you. I will decide when I’m satisfied that an application meets requirements, not a machine.” When you present a complex means of identifying what could be a streamlineable case because it has almost no visible risk, you may find that the pushback from the processing officer is: “How do I know? What happens when it goes wrong? Who’s going to get kicked at the end of the day for making a wrong decision?” If officers assume it is going to be them, then they will insist on doing full processing anyway, regardless of whether or not a case has been identified as low risk.

So there is a lot of cultural change that has to occur before you can trust the process. Going back to the television example, what we actually have to do is establish that the system is producing results that are reliable. That’s evidence. But people have to actually see that the results are reliable to develop trust in the system. Only then will they start believing it. Until we can demonstrate that, people won’t trust the ‘black box’. They will simply try to second-guess it. With Analytics, particularly using things like predictive modelling, nothing frightens people more than saying, “Trust me. The algorithm running in this little machine will decide whether or not you’re going to get sacked for making a poor decision”.

There’s a real tension here. If the predictive rules you discover through Analytics conform to existing practices, then you can embed them in an IT system and they will be accepted. The problem with that is, you’re not making any advancement towards more efficient processing, because you are basically embedding commonly understood business rules that were being applied anyway. You may have standardised, but you’re not looking at anything new.

SS: Was the cultural challenge something you anticipated? Did it come as a surprise?

KF: No, it didn’t. The volume was a bit of a surprise. The difficulty of ‘selling Analytics’ was the gist of Eugene’s presentation two-and-a-half years ago now. In that presentation he covered the issues of how you get corporate buy-in—not just the top level, but other levels. So we knew that it was coming. What we had underestimated was the severity of the pushback. A single anecdotal negative outcome can put a lot of pressure on an Analytics area. You can have 99 out of 100 things working really well and that will be just expected. One bad case will negatively colour the whole enterprise if you let it.

At the same time, the challenge we had was that this was a brand new field. We had to demonstrate potential value very quickly. We used examples that made sense to everybody up and down the line to prove our capabilities. A very early piece of work that we did on some travel data identified that at a particular point in time in 2006, anybody with a Belgian passport trying to get on a Cathay Pacific flight in Hong Kong and wanting to fly to Brisbane or Perth was 80% likely to be an imposter. This had never been highlighted in that way before, and we demonstrated it just through a simple decision tree, but it was enough to highlight to people just what the power of this particular approach was. It was a data-derived pattern. I remember going to the Identity Branch and saying, “Guess what we found?” And they said, “Yes, we knew that. They were all imposters.” Only a few people at the time would have been aware of this. There were other similar patterns, still invisible to the organisation. It was an interesting exercise in selling the power of the concept. It was a really important part of the process, because that decision tree went up all the way to the senior executive as an example of what could be done. This then gave us the permission to go and explore further.
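The interview does not say which tools were used; as a hedged illustration, a decision tree of the kind described can be grown in a few lines with scikit-learn. The records below are entirely synthetic and invented for this sketch.

```python
# A sketch of the decision-tree approach described above, on synthetic data.
# The features, values and labels are invented for illustration only.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

data = pd.DataFrame({
    "passport": ["BEL", "BEL", "AUS", "GBR", "BEL", "AUS"],
    "carrier":  ["CX",  "CX",  "QF",  "BA",  "CX",  "QF"],
    "embark":   ["HKG", "HKG", "SIN", "LHR", "HKG", "SYD"],
    "dest":     ["BNE", "PER", "MEL", "SYD", "BNE", "BNE"],
    "imposter": [1, 1, 0, 0, 1, 0],   # outcome the tree tries to predict
})

X = pd.get_dummies(data.drop(columns="imposter"))
y = data["imposter"]

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=list(X.columns)))  # human-readable rules
```

The point of the anecdote is less the tool than the output: a small, readable set of data-derived rules that could be walked up to the senior executive.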

Some of these problems still persist and will persist for years to come. There will be natural organisational reluctance to adopt some new approaches, for reasons that are not necessarily logical but are nonetheless understandable within the cultural and organisational constructs that exist. For example, if I use the word “efficiency”, some will start counting the bodies they’re going to lose as an ‘efficiency dividend’. We have to get on the front foot, pointing out that in four years’ time there will be a million more visas to process given current growth rates. How are we going to do that with current methods and our current processes? Analytics offers a solution that can help you manage the growing caseload without a loss of integrity.

APPROACH TO ANALYTICS

We have developed a project methodology to develop and deploy innovative approaches, but it’s a non-standard methodology. Our process starts off in the lab. We first have to convince ourselves that something is worth selling. So we want evidence. We want the confidence that whatever we are going to be building is actually going to yield a positive outcome. Once we’ve done that for our own purposes and signed ourselves up to the concept we can then start exposing it through a test environment to the business side of the house that we’re trying to service, but at very limited scale. The main purpose of that is to build more evidence, but to build it closer to the business, and to expose the business people involved in the testing environment to the potential that is offered. If it passes that particular stage then we can move into a proof of concept or prototype that is going to bring it very close to the front line but not in a full blown production environment, so that we can convince more business delivery people, get more information back about how it is going to be consumed, and still have the flexibility to tweak and change our view of how certain things should happen. Once we’ve got all of that boxed up we’ve effectively done the design and development stages of any project and we’re ready to put a really strong business case to the organisation which says: this thing works; it has the following limitations; the following people like it; the following people hate it; and it’s going to cost this much money to turn into an enterprise solution.
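Expressed as a purely illustrative stage/gate checklist (the stage names follow the description above; the gate criteria are paraphrased, not an actual DIAC artefact), that path looks something like this:

```python
# Illustrative only: the lab -> test -> prototype -> business case path described
# above, expressed as a simple stage/gate checklist.
STAGES = ["lab", "test", "prototype", "business case"]

GATES = {
    "lab":       ["evidence the idea yields a positive outcome"],
    "test":      ["limited-scale evidence built alongside business users"],
    "prototype": ["front-line feedback gathered", "costs and limitations understood"],
}

def ready_to_promote(stage, evidence):
    """True when every gate criterion for the current stage is satisfied."""
    return all(criterion in evidence for criterion in GATES.get(stage, []))

print(ready_to_promote("lab", {"evidence the idea yields a positive outcome"}))  # True
```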

That’s quite different to the usual process, which relies on coming up with all the answers up front: doing the design and development, in theory, up front, for something that nobody really understands—even the people who are pushing the project—then doing all of the engineering conceptually, again up front, and then in the last three months of a project trying to do the coding and the delivery and the socialisation on the ground. Then you may find out that it doesn’t work properly, and it doesn’t do what it is supposed to do, and people hate it because it is not performing, and you’ve switched off the other things they were using before because you need the money to pay for all of the new stuff.

We’ve been avoiding that by using a gradual incremental process. The business, even in the lab stage, is aware of what we are doing. We involve the business directly as soon as something is let loose out of the lab so we can pick up ‘cultural bits’. Introducing it early means gathering a lot more information and feedback about what sells and doesn’t sell in our business environment. There’s another layer to this too. If you can get analysts and other team members who understand the business part of the world and the capabilities that your Analytics processes can enable—if you can get that crossover happening—then you start to think of brand new ways of doing the business, brand new things that you can do that were never envisaged at the start of the project. That’s what we have tried to do and it’s been reasonably successful in some areas. It may not fit every type of project, but it certainly does fit the Analytics environment in this Department.

SS: So in that context the Analytics function is at least in part an ongoing R&D shop.

KF: There are two components of our Analytics process. There is what I call the bread and butter Analytics work. Once we have established a dependency on Analytics for a business process, we have a responsibility for maintaining that function. So when we produce predictive models for visa systems or border systems, effectively we go into our production environment. These models need monitoring, refreshing, supplementing and refining. New technologies become available and new systems approaches get brought in, but it’s really a production house. Effectively, that should be about 80 percent of our time, in my view. Within that a lot of innovation is still possible, but if I don’t produce a model that supports a new visa class that’s coming online, then we’re not doing our job. If we get that bit right, then the other 20 percent of the time is the stuff of innovation, where we can actually say, “Well, we’ve never looked at this part of the business before. Is there a new way of actually addressing this question or of addressing this risk?” So I think Analytics is both: R&D and Production.
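As a hedged sketch of what the “production house” side can involve, the fragment below flags a deployed model for refresh when its performance on recently decided cases falls well below its baseline. The metric, tolerance and data are illustrative assumptions, not anything described in the interview.

```python
# Illustrative model-monitoring sketch: compare recent performance to a baseline
# and flag the model for retraining. Metric, tolerance and data are assumptions.
from sklearn.metrics import roc_auc_score

def needs_refresh(y_true, y_score, baseline_auc, tolerance=0.05):
    """Return (flag, recent_auc) for a batch of recently decided cases."""
    recent_auc = roc_auc_score(y_true, y_score)
    return recent_auc < baseline_auc - tolerance, recent_auc

flag, auc = needs_refresh([0, 1, 0, 1, 1, 0], [0.2, 0.7, 0.4, 0.9, 0.6, 0.3],
                          baseline_auc=0.85)
print(f"recent AUC = {auc:.2f}, refresh needed: {flag}")
```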

SS: You’ve contrasted the incremental with the big bang approach to systems development, and couched that in terms of uncertainty. How have you thought about software?

KF: If you imagine a simple curve that traces the cost of a mistake against where you are in the systems development cycle—lab, test, prototype, production—making mistakes in the lab is low cost. You can afford to back a few wrong horses. You effectively have a bit of room to try out some thinking, and you can afford to fail, because in your lab environment the cost structures are, generally speaking, quite low. The main cost there is people, unless you make the wrong decisions about platforms and processes and potential software solutions. At the other end of the scale, if you make a mistake in a production environment that has to process 40,000 people a day, one mistake can quickly become very expensive—you know the old line about a computer enabling you to make as many mistakes in a couple of seconds as you used to take a whole year to make manually. And the costs are commensurate with that. So if we get a production system wrong, and it doesn’t deliver, the costs are extremely high to the organisation, both in terms of what people think about the organisation’s competency to manage a process, and also in terms of infrastructure and other investments: people, systems, IT solutions, and so on.

One of the strategies that we have tried to use is to minimise the risk to the budget by presenting to the department a low risk model that allows us to get things wrong. If I had, for example, signed up with a software vendor that was going to cost me millions of dollars per solution—CPU dependent, volume dependent, data volume dependent, whatever—it would have severely limited our ability to get things wrong. Everything that we got wrong would have had a high price tag on it. Luckily, in Analytics, there is a fair amount of competition.

In a more traditional IT environment, utilising something like the waterfall project method, even starting a project requires the pre-allocation of sizable resources. This effectively stops staff from trying out new processes or ideas, because startup costs are high and there is a real chance of failing to deliver.

One of the key factors in the Analytics space is the range of open source work being done, which is actually, in some respects, more powerful than what some commercial vendors could ever provide. The concept of having a Research and Development department of 50,000 people is beyond even the largest companies in the world, but effectively that is what open source is. It is not a well-controlled R&D department, but maybe you don’t want that sort of control in R&D anyway. What we’ve found in the Analytics space is that the open source solutions have allowed us to do things that otherwise we could never have even looked at seriously if we’d had to pay the price of a high-priced commercial solution. Just the concept of having to go out to tender to test whether something can do a particular thing is a total deterrent to trying it. We might want to do, for example, some entity extraction work. There are a number of commercial and open source software solutions around. If I go for anything I have to spend money on, it’s a formal procurement process. I have to go out to tender or use similar processes. The costs are not trivial—both for us and for potential providers—and there is usually a significant time lag. Open source allows us to go and try things straight away. It may not be the best solution around ultimately, but it enables us to build some confidence around the process, and then—if we have enough confidence around the process, and the open source software turns out not to be sufficiently robust for one of the next stages—at least we have the evidence base that the concept and the process work to a particular level of certainty. So we are not actually deterred from trying. That’s what open source has managed to do for us with Analytics.
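For example, a first pass at entity extraction with open source tooling can be as small as the sketch below. It assumes spaCy and its small English model are installed; spaCy is used here only as an example of the kind of freely available software mentioned, not as the department’s actual toolchain.

```python
# Minimal entity-extraction sketch using the open source spaCy library.
# Assumes: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("The applicant flew with Cathay Pacific from Hong Kong to Brisbane on 3 May 2006.")

for ent in doc.ents:
    print(ent.text, ent.label_)   # e.g. ORG, GPE, DATE
```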

SS: You’re seeking to delay any large expenditures until the point that your uncertainty has been commensurately reduced, which being able to make mistakes in a lab environment, without going out to market, enables you to do.

KF: It’s a case of value for money: what value can you demonstrate, empirically, that you get for your money. So, if you get to a particular point at which the open source or the free or the home-built solution can no longer cope, you’ve got lots of evidence that the process has value. Then you can do a decent cost/benefit analysis of potential tenderers with commercial software—plus you’ve got a really good benchmark, expressed in terms of capabilities and functionality, that you can use to filter other solutions against.

Some government organisations end up going through those processes, and some build their own because there is nothing in the commercial sector that will do the job. That’s fine. But don’t forget the $200 software package that might just do the job. Perhaps not to 100 percent of the capability, but when you want to get to 100 percent performance and you’ve got to pay two million versus two hundred dollars, you can do the cost/benefit analysis. Using this approach does shift a great deal of risk to us. It is much more comfortable having a large vendor accept all responsibility for maintenance, problem fixing and so on, rather than having to do it yourself.
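A toy version of that cost/benefit comparison, with entirely made-up figures (the only numbers taken from the interview are the $200 and $2 million price points):

```python
# Entirely illustrative: what does the last slice of capability actually cost?
cheap = {"cost": 200, "capability": 0.90}        # assumed coverage of the need
full  = {"cost": 2_000_000, "capability": 1.00}  # assumed full coverage

extra_capability = full["capability"] - cheap["capability"]
extra_cost = full["cost"] - cheap["cost"]
print(f"The last {extra_capability:.0%} of capability costs an extra ${extra_cost:,}.")
```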

In a lab environment you are insulated. You are not impacting the general IT environment. Even in the test environment you can separate most of your functionality out from the normal IT environment. There are few risks to our standard data platforms and systems. Our IT people are responsible for the safekeeping of data and the reliable provision of very time-critical and important client services. If you introduce something into that environment that creates hazards, then they are quite justified in protecting the systems. This is doubly true in complex data environments where multiple systems could be affected by a rogue process. The tests we run in the lab environment allow us to demonstrate what potential risks our work poses to the rest of the environment. And again, that builds confidence—should build confidence—with the IT organisation that by introducing this piece of software you are not going to damage a whole enterprise solution. There are lots of advantages to using the low cost, low impact, high gain lab/test environment to build an evidence base across all the dimensions Analytics has to address.

ANALYSTS

SS: What do you look for in an analyst to fit into that environment?

KF: The first thing is, obviously, that the analyst must have the technical skills that are required to work in that environment. At the start, if you are not a technical expert in the field, as I am not….

SS: The bamboozlement risk is high…

KF: In the public service environment, once you have made the commitment to hire, it takes you three to six months to bring somebody on board. If you make a mistake, you haven’t got the capability you thought you hired, and you now have a staff member who is probably as unhappy as you are about the job they find themselves in. My preferred method is to go out and contract in some skills to establish a baseline. The risk is relatively low in making a decision about a contractor, because if they don’t perform, the contract can be terminated.

SS: It’s the open source model, but with people.

KF: When you’re at the start of the process and you don’t really have an ability to make a judgement about what is good and what isn’t good, I think the net risk with contractors is a lot lower. The other things I expect from contractors now, knowing what I now know, are that they have the ability to communicate and that they are prepared to go and identify business problems, take ownership of business problems and look for business solutions, rather than being technicians who sit there waiting to be told to do something by somebody. So they have to be able to communicate both ways. They’ve got to be able to understand what the business problem is. They have got to be able to communicate things that are important to the business, which includes bad news and good news. That’s not going to happen overnight, and you can’t hire it at the start of a process, so you have to be prepared to invest in the individual in order to allow them to learn the context.

There are some analysts who will probably direct their skills at a variety of problems which are very important, but which can be solved with some fairly basic skills. ‘Basic’ is not derogatory. It’s actually a lot more advanced than I would have, but they’re not super-skilled, although perhaps we can provide a path for them to become that. In our context I like a national approach. We are doing a lot of work developing our analyst network at the moment. The main reason I want it is that there are some smart people sitting in Perth and Brisbane who we might want to tap into, and in this particular age there is no reason why, in a globally networked environment, they can’t be working on one problem from all across Australia.

Then there are starter analysts—people that provide pretty much a refined business intelligence capability and have the fundamental skills of reading and manipulating data. They’re doing things with data that are actually quite sophisticated, but they don’t require a Ph.D. in statistics. They need to be able to drive the toolset, and they’ve got to produce answers that are reliable and valid.

Then of course there are more sophisticated analysts who can effectively take an intellectual lead.

In any organisation like DIAC, you will always have a bleed out of some of these skills either to other parts of the economy, or even to other parts of this organisation. Traditionally, we cry a lot and complain that there’s a loss of skills, and the reason for that is that we don’t yet have an active process in place that grows those skills in house, so that when you lose your ‘left winger’, there is a young guy in the junior team that is ready to move in. That’s the sort of thing we need to do more of. That said, bleeding out is not necessarily bad, because they go to other business areas that are then better informed about what they can expect from an Analytics process.

SS: Cycling back to where we started, 3 years ago, with a CIO deciding that he would like an Analytics function, how has the level of understanding of Analytics and what it can deliver for DIAC changed over that period at the executive level?

KF: The ability to show relevance and quick wins to the business has, at the very senior executive level, certainly given us the opportunity to continue for the time being. We are being relied upon increasingly to produce business outcomes that are going to help in a more constrained resource environment. Our executive is also aware that we can see things that we’ve never been able to see before. I think all of those things, if they are communicated appropriately, are good things. If the Secretary—and he has done this—can pop down and speak to an operator at Sydney Airport who says, “Boss, this is really good. I don’t know what the math is, but it is pointing to the right people,” then the message has obviously gotten through at all levels.

SS: Do you have a strong view about where an Analytics function belongs with respect to the IT function, or with respect to the organisational divisions that it’s supporting, or on centralised versus embedded arrangements?

KF: Analytics sits in the business because it asks and answers business questions. It finds solutions to business problems. It is not an IT part of the world. It’s not a housing or a building service. It is a business process as far as I can see.

Ultimately the sort of people we want to attract to this area are business people who are outwards looking and constantly searching for what we could do for DIAC, even though the business itself may not have identified a problem or an issue or an opportunity. The people in the Analytics function need to actually be in contact with business areas and be prepared to say, “Have we got a deal for you,” and the deal should not be presented as “You can reduce staff numbers if you let me put a little grey box in the corner”. That’s not a good deal. You’ve really got to understand the business quite well to be able to say, “We may be able to do something for you. We need your support to try it out. Here’s the idea. Into the lab it goes, and we’ll let you know how the experiment turns out, whether it works or not.”
