Sunday, 17 January 2016

Business value, Data Discovery vs BI

Working in pre-sales for Platfora, I spend most of my time in that murky, often misunderstood no man's land between IT and business users.  Don't get me wrong, a lot of organisations really have worked out how to get this relationship right, but many have a long way to go.  The organisations that have reached the higher functional levels are generally those that recognise that any information processing exercise is a means to an end, not an end in itself.  What I want to explore here is what that means in practice and how you can move forward to a higher functional level - and at the heart of this is the subtle difference between Business Intelligence and Data Discovery.

Silos and war-zones

Organisations large and small often make the mistake of seeing the delivery of a system capable of creating a parameterised report, pivot table or pretty dashboard as a success in itself.  For many, the mere delivery of these systems has been a long and bitter struggle.  To examine these systems really critically you have to ask: what value do they deliver?

In many cases they will deliver a piece of information to a group of users - "What's the backlog in my call centre?", "How many visitors did my website have?"  While these are all of some value, in almost all circumstances they deliver little or no business value.  To deliver business value a system has to inform the business in a way that allows it to change its behaviour: to save money, increase revenue or avoid regulatory censure.  These actions add most value when they align with the unique selling point or competitive advantage that the organisation is strategically targeting.

If an organisation has targeted its USP on the quality of its customer service, call centre waiting time is critical, but just reporting what that waiting time is adds no value; it is the action that is then taken that adds the value.  A remedial action can be taken - bring in more call centre staff - or an attempt can be made to find the root cause and fix that: "do we have a service problem?"  The remedial actions are the levers that management has to control business processes, but to use these effectively, rather than just reacting to an issue and treating the symptoms, it's important to give the responsible management the tools to discover the root cause.

While a BI solution can be of some use here, there is a major flaw.  In almost all circumstances you need to know ahead of time what the nature of the cause might be, so that a system can be built to process the data into a metric that can then be used in decision making.  For closed problems, where the number of possible causes is fixed or limited, this can work.  For open-ended problems it can be futile, and the mere effort of analysing the problem can cause analysis paralysis, resulting in the system never being built in the first place.

Solutions to some of these issues started to appear in the mid-2000s and became mainstream about four years ago, in the form of Data Discovery platforms.  These are not the right solution for every information problem: if I was creating a solution to monetise data or building a recommendation engine, I'd be developing code.  If I wanted to forecast the weather I'd be developing code.  But if I wanted to know "why" I am seeing something, and I had a good range of data feeds which encompass the problem, I'd be reaching for a Data Discovery tool.

Why Data Discovery?

The value of a Data Discovery tool is that it allows you to take a wide range of data sources, some raw, some pre-processed, combine them freely and perform transformations and calculations to produce an analysis.  Sometimes these are purely numerical or visual analyses; sometimes they are enhanced with statistical modelling for groupings and classifications.  These are tools that allow users to explore and "see" their data.  The biggest change to this category of tools has been enabled by the arrival of Big Data - but very few tools genuinely exploit the potential of that capability.

Many of the first generation Data Discovery tools only implement connectivity to Big Data solutions, typically via a SQL interface, perhaps with some sort of accelerator such as a pre-aggregation engine or cache.  While this allows them to read data out of a Big Data source, it's not really enabling Data Discovery in Big Data.  Another solution is needed to prepare the data, make it available and set the security model - nearly always an IT function.

So while these solutions connect to Big Data, they themselves are no more capable of scaling than they were before - still limited to the data you can fit on a single machine.  In effect, to use them you have to make Big Data small, and crucially this must happen before the data reaches the Data Discovery tool.  So while these tools make some grand claims about enabling Big Data Discovery, the reality is rather more limited - it may look, feel and smell like your favourite Data Discovery tool, but its functionality has been reduced to that of a BI reporting tool.

To genuinely deliver the nirvana of Data Discovery in Big Data, a solution has to not just connect to, but actually utilise, the storage and processing capabilities of the Big Data platform, so that the blending and calculations take place in the platform at scale.  In Hadoop terms this means using the storage and the native scalable processing APIs such as MapReduce and Spark, and enabling business users who are not familiar with coding to use them.

Options?

So at this point we've reached the conclusion that adding real business value needs Data Discovery, and delivering business value at scale requires Big Data Discovery.  So what are the options?  The core capabilities of Data Discovery are the ability to combine and transform data in any way required and to rapidly visualise that data.  If you take those as the requirements, there is really only one option that delivers the scale and agility to deliver business value.  Platfora is the leading platform for Data Discovery on Big Data, and it's quite clear why.  It's not trying to take a legacy product, and perhaps a compromised ROLAP engine, and patch it up for Big Data; it is a genuine ground-up product built for Big Data.  Crucially, it's not dependent on the IT department to prepare data in Hadoop, but places this crucial capability in the hands of the end users - it utilises the entire Hadoop platform for Data Discovery, along with the massive scale this implies.

Conclusion

There is a quiet revolution happening in the world of information solutions: by implementing Data Discovery solutions in Hadoop such as Platfora, many organisations are now leapfrogging their competitors.  By focusing on the end goal of delivering business value, and making the technology fit the high level business processes rather than delivering intermediate solutions that themselves feed into numerous other processes, Platfora customers are increasing their agility and competitive advantage.  When IT delivers platforms that enable business users to genuinely self-serve without IT intervention, the business users keep coming back for more and you develop a mature, highly functional organisation.

As for everyone else?  Many are now beginning to see the light, but some are determined to keep banging the rocks together, ably assisted by the snake oil sellers taking legacy products and adding endless fixes and workarounds to make something that merely appears to be Big Data.  Time will tell which approach wins out, but I'm pretty confident which way it will go.

Sunday, 5 October 2014

Embedded Pentaho Data Integration - Raspberry Pi

This week I was asked if it was possible to run a Pentaho Data Integration transformation in an embedded use case. My initial thoughts were that I didn't know, but I didn't see why not. Best to try it out.

In the UK the Raspberry Pi is a very popular, readily available embedded platform that's used in all sorts of fun, hacky projects.  Costing about £25 ($40), it's also pretty cheap.  It has a 700 MHz ARM11-based CPU and 512MB of RAM, so it's no powerhouse, but it should be enough.


The board itself is about the size of a credit card and comes with a good selection of ports: HDMI, 4x USB, Ethernet and a micro USB for power.  It also has a 40 pin GPIO (General Purpose Input/Output) connector that opens up a wide range of possibilities.

The board can be supplied with an 8GB SD card preloaded with a collection of operating systems, which is then used for storage and booting.

To get started I installed Raspbian, which is a Debian derivative optimised for the Pi.  Installation took a couple of minutes and the OS booted; initially I was connected to a monitor and keyboard just to get the setup done.  Once the initial setup was complete and I was on the network, I just had to enable SSH and then log in over the network.  After this point I dispensed with the keyboard, mouse and monitor.

I obviously wasn't going to run Spoon, the transformation development environment, but my objective was to see if I could run "a" data transformation on the platform. One way to achieve this was to run a Carte server, which allows you to connect remotely and run transformations.

The Carte server can be copied over from the data integration design tool folder, and to my utter amazement, with only a couple of errors in the console (some X11 related, possibly connected with running headless), the server started up first time.  (I know that's supposed to happen with Java, but still.)
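For reference, starting Carte is just a case of giving it an address and a port to listen on - something along these lines (0.0.0.0 and 8081 are only examples, and on 512MB of RAM you may also want to trim the JVM heap options set in the script):

cd data-integration
sh carte.sh 0.0.0.0 8081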


So the next part was to create an ultra simple transformation just to show that things work!
This just generates a few rows, waits 10 seconds between each row, gets some system information and writes it to the log - virtually pointless, but it proves the use case nonetheless. The next part is to configure the connection to the Carte server: View tab -> Slave Server -> New, and enter the config.
With that all configured, just run the transformation, choose to execute remotely, select your new slave server config, and off it goes.



Just to be sure that it was executing on the Raspberry, here is the console output on the Pi.
So it works - what's next?  That's where the fun can begin: there is a huge range of applications this could enable, plus lots of options for how to communicate with the remote devices and make them as autonomous as possible.  Hopefully I'll find the time to try some of these out!

Monday, 24 March 2014

Real Time CTools Dashboard - Twitter - Part I

Real Time CTools Dashboards  

If you have read any of my previous blogs, you will have noticed that I like the slightly unconventional and challenging.  My last example was a real time dashboard showing the status of the London Underground system using Pentaho Business Analytics Enterprise Edition.  In this post I'm repeating the "Real Time" theme but using CTools for the front end.

I've split the post into multiple sections as it's rather a lot to put in a single post.
  • Part I  Covers the data integration and data sourcing (This post)
  • Part II  Covers the front end.

CTools?  What's that then?

Unless you have been living on another planet for the last 5 years you will surely have come across the great work being produced by WebDetails and others.  Over the last few months I've been fortunate enough to work with the Pedros (Alves and Martins) and the rest of the WebDetails team, and I've been inspired to see what I could put together using CTools and the other parts of the Pentaho toolset.  This came along at the same time as an internal competition in the sales engineering group to create a CTools dashboard, so with some assistance from Seb and Leo, I created something a little different.


Real time?  No problem!

One of the really powerful features of CTools is the ability to use a Pentaho Data Integration transformation as a direct data source.  As PDI can connect to practically anything, transform it and output it as a stream of data, you can put almost any data source behind a CTools dashboard: MongoDB, Hadoop, Solr or perhaps a RESTful API.  Not only that, you can use multiple data sources in a single transformation and blend the results in real time.  In effect it's using a tool that had its roots in ETL as an ETR tool, "Extract Transform Report" - or, to look at it another way, an ultra powerful visual query builder for big data (or any data for that matter).

The first bit is relatively easy: create a search string, get an authentication token, get the results, then clean up.  To enrich the data feed a little more I've added a WEKA scoring model that adds sentiment analysis to the stream.  At this point I've got a raw feed with the tweet text, some details of the tweeters and some sentiment.  To enliven the dashboard, a few aggregates and metrics are needed.  One option is to add further steps to the transformation to create the aggregations, but I'd rather run the one Twitter query to create a data set and then work with that.  There is a way to do that...

Querying the CDA cache

This is where we can use a novel approach to get at the results of the last query.  In CTools the results of each query are held in the CDA cache.  It's possible to access the results of the CDA cache using its URL directly:

To get at the URL open the CDA page associated with the dashboard, select the data access object, and look at the query URL.

This URL can then be used as a source in a transformation; in this case I put it in an HTTP Client step:
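As an illustration, the URL has roughly this shape - the path and dataAccessId values here are hypothetical, and the exact form varies between CDA versions, which is why it's easiest to copy it straight from the CDA previewer:

http://localhost:8080/pentaho/plugin/cda/api/doQuery?path=/public/twitter/dashboard.cda&dataAccessId=rawTweets&outputType=json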

Then using the JSON parser the individual fields can be split out again:

I can then use a range of sorting, filtering and aggregation steps to create the data views that I want.  Just branching the data flow and copying the data allows you to create multiple views in a single transformation, each of which can be picked separately by a new CDA data source.

That pretty much covers the data integration.  At this point I've used PDI as a visual querying tool against a web API, and the same tool works to query the cached results of the first query - this is powerful stuff.  In addition to this, with virtually no effort at all, I'm doing sentiment analysis on Twitter using a WEKA model.  This process could be enhanced by using MongoDB as a repository for longer term storage of each of the search results, allowing the possibility of multiple "iterative" guided searches to build up a results "super-set" that could then be analyzed further.

In the next post I'll talk about building the CTools front end using CDE.  I'll include some rather distinctive styling, and some very flashy but ultra simple CSS and JavaScript tricks.


Tuesday, 5 November 2013

Agile Real Time BI with Pentaho - Part III

Bringing it all together

This is the third article in a trilogy on using Pentaho Business Analytics to do Business Intelligence on real-time data sources in next to no time.  The first two articles cover using  Pentaho Report Designer reports in combination with Pentaho Data Integration transformations to provide summary and detail level reports in a dashboard.  The details of how to create these are covered in part I and part II.

Lets take a look at what I'm aiming for at the end of this article:
This shows the summary report covered in Part I, the detail level report from Part II, and what's that on the far right?  It is a Pentaho Enterprise Edition Analyzer view running on Mondrian, in this instance reporting off a Cassandra database being updated every 60 seconds - so that's real-time (well, to the nearest 60 seconds) OLAP analysis!

This example is a huge oversimplification of what is possible using the technology, but a crucial component here is Pentaho Data Integration's data blending technology.  The outcome is that we can use a NoSQL database as a data source for Mondrian via JDBC.

There are three transformations behind this:
  • Read the API and load the data into Cassandra
  • Clear the Mondrian Data cache
  • Read from Cassandra and make the data available over JDBC

Reading the TfL API and putting the results into Cassandra

The first part is to get some data.  This reuses the basic API call and XML parsing from Part I; in this case I'm also capturing the status details field, which I want to store in my Cassandra database for possible future enhancement (stay tuned).  I also separate the time and date elements to make the Cassandra key.
There is really very little to configure to write the data into Cassandra, just configure details like the column family (think wonky table), specify the fields to use as a key and that is about it. 

Clear the Mondrian Cache

When we make new data available within Cassandra and we're querying via Mondrian, we need some way of indicating that the cache needs to be updated.  There are a couple of ways to achieve this, including telling the data model not to cache the data, but in this case I'll take the nuclear option and blow away the cache after loading new data.  This can be done via an API call in version 5 of Pentaho, so I used an HTTP Client step in PDI.
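From memory the call is a simple authenticated GET along these lines (treat the exact path as an assumption and verify it against the 5.x REST API documentation for your version):

http://localhost:8080/pentaho/api/system/refresh/mondrianSchemaCache

Point the HTTP Client step at that URL with the BA server credentials and the schema cache is emptied after each load.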
I just used a simple job set to repeat on a 60 second loop to run these two transformations.  

Now for the Magic

So how do we use a traditionally SQL based technology on a NoSQL database?  This is where we can use one of the new features of Pentaho Business Analytics 5.0, data blending.  In effect this allows a developer to create a data transformation using the steps available in PDI and make the resulting data stream available for SQL querying via a thin JDBC driver.  The transformation is stored in the enterprise repository and the data made available via the Data Integration Server; details of how to do this are available in the Pentaho Wiki.  The transformation in this instance could not be simpler: read the data with a Cassandra input step:
add a constant of 1 as the "count" of the number of entries to aggregate up in Mondrian, and then use a step to act as the "output", in this case a Select Values step.
The final step in the transformation is to use the Data Service tab in the transformation settings.  This associates a data service name with the data stream from a transformation step.

The easiest way to get at the data is to use the data source wizard in the Pentaho User Console, but before doing this you need to create a new JDBC connection (you can do this in one step, it's just cleaner to explain in two).  The JDBC connection needs to use the custom driver type and a custom connection URL.

For the custom connection URL I used:
jdbc:pdi://localhost:9080/kettle?webappname=pentaho-di
and for the driver:
org.pentaho.di.core.jdbc.ThinDriver
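To sanity-check the connection outside the wizard, a few lines of plain JDBC against the thin driver will do - a minimal sketch, assuming the data service was named "tube_status" in the Data Service tab and using placeholder credentials:

import java.sql.*;

public class PdiThinDriverTest {
    public static void main(String[] args) throws Exception {
        // Register the PDI thin driver and connect to the Data Integration server
        Class.forName("org.pentaho.di.core.jdbc.ThinDriver");
        String url = "jdbc:pdi://localhost:9080/kettle?webappname=pentaho-di";
        try (Connection con = DriverManager.getConnection(url, "user", "password");
             Statement stmt = con.createStatement();
             // The data service is exposed as a virtual table named after the service
             ResultSet rs = stmt.executeQuery("SELECT * FROM tube_status")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));  // print the first column of each row
            }
        }
    }
}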
The final step was to create a new data source in the data source wizard.  I created a simple analysis view and then added a very quick and easy stacked column visualization in an Analyzer report.  Add the new report into the dashboard created in Part I, set the refresh interval as appropriate, and you're done.

So that's it!  Real time BI with a combination of reports with drill down and real time OLAP style analytic reporting.

Where Next?

This has been an interesting, quick and easy experiment, but it has by no means reached the limit of what's possible.  There are three paths that I might pursue in future posts: advanced visualizations, text search or predictive analytics.  The visualization option could be something like a force directed layout showing the stations and relative locations of the trains as a network graph.  Another alternative would be a version of the live London Underground map.  Text search could be an interesting option for finding particular issues or performance problems, possibly linked with a Twitter search.  Finally, another option is to use WEKA to do some predictive analytics on historic data and the factors influencing performance, and build a predictive engine to answer questions such as "It's raining, it's 08:15 on a Monday morning, I'm at Kings Cross and I need to get to Angel, should I just walk?".  The answer to that question will almost certainly be yes in every instance, but you get the idea.

If you are interested in a one-on-one demo, look me up at one of the trade fairs that I regularly attend for #Pentaho. Keep up to date on when I'll be about by following me on Twitter @MarkdMelton.



Tuesday, 29 October 2013

Agile Real Time BI with Pentaho - Part II

Where to go next?

This is part II of a trilogy of posts on using the Pentaho Business Analytics platform to do Business Intelligence on real-time sources as rapidly as possible (it took me longer to write the first post than it did to build the solution!).  If you have found this article from elsewhere I'd suggest reading part I first, which is available here:
In Part I I implemented a simple report as part of a dashboard using a PDI transformation as a data source.  I'd also like to point out that I'm not the first person to do this, nor is this aspect of the solution a new Pentaho feature - it has been around since 2011, as shown by Wayne in Using a Pentaho Data Integration Data Source with the Pentaho Report Designer.  In Part II I'm going to expand this to show how a second, detail level drill-down can be created, passing parameters into the transformation to control the search.  Part III goes on to use some Pentaho 5.0 features.

Part II starts off by creating a new transformation to retrieve and process the detail level data for one specific line.  After creating the transformation, the transformation properties are modified by adding parameters.  In this case I've added two: one for the URL of the detail level service and one for the line that I'm interested in.

The first part of the transformation repeats the call to the line status API, but gets back more detail on the service.  In parallel with this it fetches the summary level detail for the individual line, which lists information for the individual trains and stations on that line.
Again there are a few steps to trim byte order marks etc.  There is a database look-up step to get the color formatting for the line from the same database.  In addition, the second detail level look-up requires a line code; these are published by TfL in their developer guides, available here, and I have also put them into a database.  The query URL is then constructed and passed to a second HTTP request.  This generates a large XML response, where I use a Get Data From XML step to break out the values I'm interested in.  In this case I loop on the "/ROOT/S/P/T" element and then use XPaths to get the fields I need inside and outside of the looping element - roughly as sketched below.
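To give a feel for how the Get Data From XML step is configured, the setup looks something like this (the attribute names are placeholders rather than the exact TfL schema, which is documented in the developer guides):

Loop XPath:  /ROOT/S/P/T
Station:     ../../@N     (hypothetical attribute name)
Platform:    ../@N        (hypothetical attribute name)
Arrival:     @TimeTo      (hypothetical attribute name)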
Platform and Station are one level and two levels back from the looping element.  This then creates one data row for each train active on the line.   I then join this stream with the line status information that I have from the earlier call.  This is then followed by a slightly odd section...
Filter ghost stations?  I was a little confused when I first looked at the data and found stations appearing on the Bakerloo and Central lines that were listed as having overground services.  A little investigation discovered something strange: there are many stations that used to be part of the Underground but, for one reason or another, are no longer part of the network - details are available here.  Why a station that ceased to be part of the Underground in 1982 is still appearing in an API query for that line I haven't been able to work out.

If you are interested in some of the more bizarre aspects of the London Underground's missing and closed stations, there is an excellent website dedicated to the topic, underground-history.
A little further tidying of the data was required for the predicted arrival time: the default for "in the station" is a '-' character, and as I wanted to sort by arrival time this was not much use, so I replaced it with "00:00".  The next steps just sort the trains by arrival order and filter this down to the next three arrivals.

Line Status Report

This dashboard panel will again use a PRD report to display the data.  The complexity in this case, aside from rather more data, is passing a parameter to the transformation.  In this case I create a parameter "Line".  In the Pentaho Data Integration data source popup there is an Edit Parameter button.

This just links the name of the parameter as it is known in the report to its name in the transformation parameters.  I then added the data fields to the report, did a bit of manual styling and linked in the line status details, station names etc.
To make this report work in a pair with the report created in Part I as part of the dashboard, a minor modification was needed to the first report.  The Pentaho EE dashboarding has the capability to very easily link dashboard widgets using parameters and content linking.  To access this in a PRD report you need to add a hyperlink to the cell that will be the drill-down link.  The link itself will be intercepted, so it doesn't actually hit its endpoint, but I used:

 So at this point I have two transformations and two reports.  All that remains is to edit the dashboard, drag and drop in the second report, enable content linking from report one and attach this as a parameter to report two.  The refresh can be left on the individual panels or moved to the dashboard level.


So the dashboard at the end of Part II looks like this:
Clicking on the line in the left hand report changes the line listed in the right hand panel. In Part III I'll show a very simple example of how this real time data can be stored in Cassandra and queried using a Mondrian Analyzer view.

Friday, 25 October 2013

Agile Real Time BI with Pentaho - Part I

A change of direction

Back in June I changed direction in my career and moved from being a business architect trying to deliver business benefit at a large media and publishing house to being a pre-sales engineer for Pentaho.  I'd used the community edition of Pentaho quite extensively in the past, so now I've moved on to using the Enterprise Edition to show the possibilities of a modern, pluggable business intelligence platform.

One of the interesting aspects of my role is visiting trade events and shows to give live demonstrations, showing how we can solve problems on the fly with our technology.  While most people are in the talks, the rest of us on the trade stands get some downtime, which is a great chance to put together new technical demos.  Last week I was at the Cassandra Summit Europe 2013 in London and used an hour or so of downtime to put together a small technical demo.

Agile Real-time BI

Trying to come up with a sensible test case for real-time BI is often a significant stumbling block, but in this instance I took inspiration from the work that Matthew Somerville did on a live London Underground map.  Transport for London have an API available with various data sources, which is free to register for and access.

TFL Service Status
The choice of technology in this instance is Pentaho Business Analytics, both the BA server and Data Integration.  I'm using the Enterprise Edition, but most of the functionality needed is in the community edition as well.  Most of the data I'm planning to pull live, but where I need data storage I'll use Cassandra.

The objective that I've set myself is to create three report components:
  1. A report similar to the TFL service status report
  2. A drill in detail report showing the arrivals at each station
  3. An analytic view of real-time and historic data.
In this post I'll cover component one; I'll follow up with parts two and three.

The starting point for the service status is an outline architecture.  One method of producing a report component with tight control over the visual appearance is to use the Pentaho Report Designer.  For the real-time data acquisition for this component, one option is to use a PDI transformation as a data source.  So that's where we're going to start.


Real-Time data acquisition in PDI

The TfL API can be accessed by an HTTP client and returns XML.  PDI has a few steps for this: web services (SOAP), RESTful interfaces and a general purpose HTTP Client; in this instance I'll use the HTTP Client.  The HTTP Client step on its own will not generate any data rows - you have to add a step before the look-up to generate at least one row.  In this case it would be useful to know the query time anyway, so I'll use the Get System Info step to get the current time.

This gives us:
The Get System Info step is used to get the system time:
The connection to the TfL API is quite simple - just connect and get the line status:

One complication at this point is that you might get a BOM (byte order mark) tacked onto the front of the response.  On Windows I found this to be 3 characters and on Linux 1.  As a workaround I used a simple OpenFormula step to trim it.  The next part is to extract the interesting fields using the Get XML Data step.
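If you prefer, the same trim can be done with a line of Java (for example in a User Defined Java Class step) - a minimal sketch of the idea, covering both the single decoded BOM character and the three raw UTF-8 bytes surviving decoding:

public class BomUtil {
    // Strip a UTF-8 byte order mark from the front of a response string, if present.
    public static String stripBom(String response) {
        // "\uFEFF" is the BOM decoded as a single character; the second alternative
        // covers the three raw UTF-8 bytes being decoded as individual characters.
        return response.replaceFirst("^(\uFEFF|\u00EF\u00BB\u00BF)", "");
    }
}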
The Get XML Data step extracts the Line and Status fields from the XML as separate data fields in the stream.  I also added a database look-up step to retrieve some color formatting data for the report; TfL publish a style guide specifying the foreground and background color scheme for each line.  So the final transform is:

A dashboard report in Report Designer

The desktop Report Designer is a hugely powerful tool for creating visually rich report components.  While it is used extensively for creating inventory reports and invoices destined for print, its flexibility lends it to a wide range of purposes.  This flexibility starts with the supported data sources, which range from simple SQL statements, dynamic queries and metadata repository queries to PDI transformations and, from version 5.0, MongoDB.  In this instance I'll use the PDI transformation that I just created.

When creating a new data source in Report Designer there is an option to use "Pentaho Data Integration"; this opens a dialog where you set a query name and specify the path to the transformation file.  It will then read the transformation and list the steps available to use as the source of the data for the report.  It is also possible to add parameters to filter the data for the report.
By dragging and dropping the available data fields into the report you can create the layout that you want.  Each component on the report can have all its parameters set statically in the tool or dynamically from a data source.  In this case I'm going to use the foreground color and background color from the query to style the report elements.   

In this case I've set the text color to the value of the data field "foregroundcolor".  All that remains now is to style the other components and preview the report.
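For reference, this is driven by a style expression on the text-color property - just a small formula referencing the query field, something like =[foregroundcolor] (the exact field name being whatever your transformation outputs).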
That's our finished report; all that's left is to add it to a dashboard.  First you need to publish the report to the BA server; once the report is there, in the EE edition you can create a new dashboard page and drag and drop the report element into place.
In the dashboard designer the refresh interval can be set.  In this case I use 60 seconds.  So every 60 seconds the dashboard element is refreshed, where the report is redrawn and the data source queried, in this case our PDI transform bringing real-time data into the dashboard.  

So I now have a dashboard element that updates every 60 seconds, querying and displaying real time data.  In my next post I'll look at how this report can be linked to a detail drill-down to show the status of each line.  This whole exercise should have taken no more than an hour, even for a PDI and Pentaho reporting novice - you can't get much more agile than that!


Monday, 17 June 2013

When Enterprise Architecture and BI collide

Last week I was in London at the Enterprise Architecture Conference Europe.  First off I must say that this was a really enlightening conference, containing both good and bad examples of EA.  I'm a firm believer that those presenters who are brave enough to stand up and say "this just didn't work out" are often more valuable than those who want to present everything as an unmitigated success.  For a while now I've been working around the idea of a post on how your EA strategy can and should impact your BI/Big Data/Enterprise Analytics (delete as appropriate) strategy.

For those that have grown up on the Kimball method of building data warehouses, there might seem to be one solution: "we do masses of up front analysis and build a dimensional data warehouse".  While this is a gross exaggeration, it sums up many approaches I have seen.  The stock BI strategy has lots of nice, fairly obvious statements - "do away with data silos", "single version of the truth", "driven by business requirements" - which are all very nice, but how do you actually go about achieving them?  If you have the ideal of a CEO/CFO who is a strong advocate of the value of BI and is trusting enough to leave the specialists to get on with it, then you are truly blessed.  But this is not always the case, and there are often many other factors that now have to be taken into account.  With the velocity of change increasing massively, not just in technical terms but in business terms, we need to step back and look at the big picture for a moment.

Taking the view from the top, an enterprise or company is a collection of people, processes and data working towards a common goal.  Our role as BI professionals is to help the enterprise do this more effectively, by enabling it to increase revenue, reduce costs or avoid costs.  Enterprise Architecture is also a profession that aligns with this.  If you are not familiar with EA I'd recommend reading the excellent book "Enterprise Architecture as Strategy" by Jeanne W. Ross, Peter Weill and David C. Robertson.

There needs to be a clear differentiation between "Enterprise Architecture" and "IT architecture for an enterprise".  I've come across several IT architects who describe themselves as "Enterprise Architects" when they only deal in IT.  Enterprise Architecture is the view of the business processes, their standardisation and integration, and the underlying systems that support them.  Sometimes what was "Enterprise Architecture" is now being called "Business Architecture".  A simple example of the difference: the IT view is "how can I redesign or implement this system to be more efficient?", while the EA view is "how can I redesign or implement the processes supported by this system to be more efficient?".  There is no point in building the most perfect and cost effective IT system to support a process that no longer needs to be done.

While I'm on the topic, one common theme I've come across is the oddly widespread belief that "IT cannot lead business strategy".  While it is true to an extent that just because a new technology exists you should not necessarily use it, being blinkered to new technical opportunities is just as bad.  It's much more helpful to see IT and the business as having a symbiotic relationship: each a separate entity with different drivers and opportunities, but ultimately part of the same ecosystem, mutually dependent on the success of the other.

So back to the thread: whether it is explicitly stated or not, your enterprise has an architecture, and in my experience it mostly got there by accident rather than design.  The MIT Sloan Center for Information Systems Research describes four different operating models depending on the level of business process integration and business process standardisation.

By default most businesses subconsciously fall into the Diversification category, but this should not necessarily be seen as a bad thing; sometimes it is the most effective and cost efficient model for operating in certain markets.  Let me give my paraphrasing for each category, both the "formal" description and the reality of what's happening on the ground:
  • Diversification:  
    • Formally - Each business unit has the flexibility to implement its own services and products as it sees fit, there may be some shared services but generally each unit has the flexibility to define, build and run its own services and processes as required.  
    • The reality - It's chaos out there.  Lots of duplication of systems, different processes for performing the same action, multiple business definitions for the same entity, and a rapid pace of uncontrolled change.  You will probably also come across business units implementing their own IT systems.
  •  Coordination:
    • Formally - A high degree of data integration, prebuilt delivery channels and standardised technology.  Each business unit still has a degree of flexibility in the processes it layers on top of the standard services.
    • The reality - Lots of effort to define and enforce the standards.  The body responsible for enforcing them is often seen as an obstacle to progress, leading to a tendency to slide back down to the Diversification model.
  •  Replication:
    • Formally - Pretty much a franchise model, standard branding and processes reused repeatedly.  High efficiency from reduced risk of implementing new untried processes.  Local data ownership but enterprise wide definitions as part of the process.
    • The reality - Replication of data systems and small local variations to standards.  No clear view of the customer brings a risk of competing against yourself for business, and there is a continual risk that the processes are seen as a limiting factor and an obstacle to progress, leading to a tendency to move back to a Diversification model.
  • Unification:
    • Formally - Highly organised cost effective model, highly effective at identifying cross-sell and up-sell opportunities.
    • The reality - "You will be assimilated", but implementing a global enterprise BI system is easier to achieve in this model than any other.  Generally to have implemented this model means that the architecture and standards governance in the Enterprise must be top notch.
Before I go on to look at the impact that each of these operating models has on a BI strategy, there is another important influencing factor: where in your organisation do your enterprise architects sit?  Obviously by this I don't mean the nice seats in the corner with the good windows and pot plants, but what is their reporting line?  Generally, if they are within the IT organisation their effectiveness will be considerably reduced.  The tendency will be for the business to see them as "IT", so "what do they know about business?".  This was a very common theme at the EAC conference last week.  The risk is that while you may have "IT architects" who do their best, the business is doing its own Enterprise Architecture to its own tune.

How does this affect our BI strategy?  Well, even if the EAs have a clear vision of the future, if they don't have the authority to rigorously enforce it, or they are seen as living in an ivory tower, the reality is that over time new processes and capabilities will appear that bypass the central organisation.  Essentially your enterprise has reverted to the Diversification model.  This becomes your problem when you try to build a BI capability for one model but the reality proves to be very different.  The advice here is to form a good relationship with the EAs (you may even be in that group) but also look further afield to see the bigger picture.  Also remember that if they are doing their job well EAs will be having a tough time: they are often a voice of sanity in an enterprise, trying to produce change for a better future, and keeping that better future in mind is not an easy task when dealing with the day to day politics, especially as your actions may well render parts of the business redundant.  This was best summed up in a presentation last week by "When you are up to your arse in alligators, just remember you are there to drain the swamp".

So, getting to the point, how should your BI strategy be influenced by the company's operating model?  Let's start with the easy option: your enterprise is following a Unification model.  Firstly, lucky you - someone else has already done most of the hard work of data integration and standardisation.  In this model you can pretty much pick a standard BI strategy of a single enterprise warehouse off the shelf and stand a good chance of being successful.  All change should be planned and coordinated, and you just need to ensure you are plugged in at the right point.  Your technical implementation can also be considerably simplified by the reduced requirements for data integration and standardisation.  These are the models that the vendors love to tout as examples, being mostly successful and at the lower end of the cost range.

Now onto the slightly more problematic models, firstly Replication.  The primary problem here is data integration and standardisation of entities not covered by the replicated processes.  But the first question to ask is: do you need the "whole" view of the enterprise?  While the single view of the enterprise might be nice to have, and is certainly required at the very top of the business, it may not even be needed at the lower levels.  Think of it as letting your BI strategy follow the model of the business.  Do you really need to standardise elements such as customer address right across the enterprise?  As each business unit operates independently with its own customer base, what value are you adding by doing this?  So even if you build a single conceptual warehouse you can probably have a dedicated model for each unit and only aggregate data at a level where it makes sense to do so.  While there is more effort involved in maintaining essentially separate solutions, this may well be more successful than trying to force the business to change just to fit the niceties of your warehouse architecture.

Another slightly less problematic model is Coordination.  Provided that everyone is sticking to the rules, again most of the hard work of data standardisation and integration will have been done.  The problem here is how to tailor reporting to suit local process variation.  The difficulty is, firstly, that there may be hidden data sources supporting the localisation - these could even be the dreaded multi-thousand line spreadsheets - and secondly, that you may well need a different view on the same data where local process practice adds a variation to the standard definition.  The latter problem can in part be resolved by moving away from the standard Kimball dimensional model to more mature models using "foundation layers" or similar techniques to abstract the dimensional model from the main data storage location.  Here each business unit can have its own local dimensional model tailored to suit its local process variation.

Finally we reach Diversification; this is really the wild west of BI and data warehousing.  Again you really need to ask the question "Do we need an enterprise wide view?".  If the answer is yes, the solution will be challenging not just from a standards and data perspective but technically as well, with the associated costs of such complexity.  But the biggest hurdle will be getting business sponsorship for the project: your estimates will be put alongside example implementations using Qlikview in enterprises using a Unification model, with the inevitable questions about the extra cost.  Your first hurdle is going to be getting the funding just to do the work to establish the scale of the problem and produce a governance and technical plan.  The only advice here is that you have to do your best to present the vision of the possibilities and attempt to get a good executive sponsor who appreciates the scale of the problem.

While a standard approach here might be to start with the Kimball method of detailed requirements gathering followed by building a dimensional warehouse, this is going to be hugely costly and probably outpaced by the rate of change in the business.  So my best advice is simple: start with a narrow scope, just a single process or area, but in all its diverse forms.  Use as much data abstraction as possible and expect change to your source systems.  Do just-in-time requirements gathering for your analytical layer; again, requirements will change, and there is no point in documenting something in great detail only for it to be obsolete by the time you finish writing it.  A good approach to this problem is presented by Oracle in their paper "Information Management and Big Data, Reference Architecture".  This is not an Oracle specific architecture and could be applied using a range of solutions.  Deloitte present a very similar model in their paper on "How to build a successful BI strategy".  The most important role you are going to need in your team here is a top notch data modeller.  The abstract data model that constitutes the foundation layer, or Enterprise Data Warehouse as Deloitte refer to it, is a critical success factor.  The key is to be able to focus on the similarities between data sources and processes, not the differences; while the detail and the differences are important, it is all too easy to get bogged down in the detail and suffer from analysis paralysis.  Just make sure your data modeller is a "glass half full" personality.  The other key role will be your business analyst, who again needs to focus, at least initially, on the "big picture" of what types of questions the system needs to answer, rather than diving into the detail of the actual questions or, worse still, report layouts and columns.

So in summary, perhaps one of the questions you should ask before taking up a BI architecture role should be "What is your enterprise architecture model and governance process?"  If they cannot answer, or there is no enterprise model, be very wary and look carefully at what you are taking on - you might be taking on the role of chief alligator keeper.