Data Integrity. The bottom line.

Investment in MI bears fruit only when certain prerequisites are met. This series of blogs will take a brief look at each one of these essential requirements in turn. Organisations are familiar with each of the concepts I’ll discuss, but are they taking them seriously enough?

Too often not.

The very essence of MI should be data integrity. No amount of investment, expertise or technology can change the fact that poor-quality data compromises an entire MI landscape. Yet in too many organisations, data integrity is lucky to get a proper look-in. But what do I mean by data integrity? It's more than just quality: it's about having one conformed source for your master data, end-to-end visibility of your data, the rules and technology in place to dynamically address poor-quality data that enters your landscape, and deep reconciliation of your data before you release it. The ambition should be nothing less than complete faith in every aspect of your data.

To achieve this you need to take a strategic view of Data Integrity; it should never be an afterthought or a bolt-on to a project. It must be at the very core of every MI solution. But before you can set about assuring the quality of your data, you must first truly understand it. That means building a data dictionary that provides clear, end-to-end traceability of all the content within your landscape. It doesn't stop there, though: the dictionary must be kept in sync with reality or it will soon be compromised.
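To make that concrete, here's a minimal sketch (in Python, with invented field and team names) of the kind of entry a data dictionary might hold, tracing one attribute from its source system, through its transformations, to where it lands in the warehouse:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DictionaryEntry:
    """One attribute in the MI landscape, traced from source to target."""
    name: str                    # business name of the attribute
    source_system: str           # where the value originates
    source_field: str            # physical column in that system
    transformations: List[str] = field(default_factory=list)  # rules applied en route
    target_table: str = ""       # where it lands in the warehouse
    steward: str = ""            # who owns keeping this entry in sync with reality

# Illustrative entry: a customer's date of birth, traced end to end
dob = DictionaryEntry(
    name="Customer Date of Birth",
    source_system="CRM",
    source_field="cust.dob",
    transformations=["parse DD/MM/YYYY", "reject future dates"],
    target_table="dim_customer.date_of_birth",
    steward="Customer Data Team",
)
```

The details will differ in every landscape; the point is that each attribute has a single, owned entry describing where it comes from and what happens to it along the way.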

With that in place, Master Data comes next. We all know that customers, for example, often originate from different systems within your business, and that isn't going to change. But your information consumers crave one view of your customers, not several. Effective Master Data management can deliver this single, conformed view of your organisation. Don't be afraid to throw time and resource at achieving it; it will more than pay for itself.
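As an illustration only, here's one simple way a conformed "golden record" could be assembled from two hypothetical customer records, using a basic survivorship rule of "most recently updated, non-empty value wins". Real MDM tooling does far more (matching, merging, stewardship workflows), but the principle is the same:

```python
from datetime import date

# Hypothetical records for the same customer, held in two different systems
crm_record  = {"source": "CRM",     "updated": date(2023, 6, 1),
               "name": "J. Smith",   "email": "j.smith@example.com", "phone": ""}
billing_rec = {"source": "Billing", "updated": date(2023, 9, 15),
               "name": "John Smith", "email": "",                    "phone": "0123 456789"}

def conform(records, attributes):
    """Build one golden record: for each attribute, keep the most recently
    updated non-empty value (a simple survivorship rule)."""
    golden = {}
    for attr in attributes:
        candidates = [r for r in records if r.get(attr)]
        if candidates:
            golden[attr] = max(candidates, key=lambda r: r["updated"])[attr]
    return golden

golden = conform([crm_record, billing_rec], ["name", "email", "phone"])
print(golden)
# {'name': 'John Smith', 'email': 'j.smith@example.com', 'phone': '0123 456789'}
```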

Then we have the traditional arena of data quality: dealing with the dubious data that arrives from all corners of your organisation. Fortunately, there are now plenty of excellent tools that can help you dynamically treat a wide range of data quality issues, such as incorrect data types, inconsistent formatting and missing values. These can be addressed without constantly bombarding those who provide the data with exception reports that often go unactioned. But, and this is a big but, technology is not the key to success here; as always, it is simply an enabler. The key is the thoughtful design of your data quality routines and ensuring that they evolve in line with the issues you face.
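By way of example, a data quality routine of this kind might look something like the sketch below: a handful of rules that coerce types, normalise formatting and default missing values, recording what they did rather than firing off exception reports. The rules and field names are purely illustrative:

```python
import re

def clean_record(raw):
    """Apply a small set of data quality rules to an incoming record,
    fixing what can be fixed and noting what was changed or could not be."""
    record, issues = dict(raw), []

    # Rule 1: data type - amounts must be numeric
    try:
        record["amount"] = float(str(raw.get("amount", "")).replace(",", ""))
    except ValueError:
        record["amount"] = None
        issues.append("amount not numeric")

    # Rule 2: formatting - normalise postcodes to upper case, single spaces
    record["postcode"] = re.sub(r"\s+", " ", str(raw.get("postcode", "")).strip().upper())

    # Rule 3: missing values - default the country when absent
    if not raw.get("country"):
        record["country"] = "UNKNOWN"
        issues.append("country defaulted")

    return record, issues

cleaned, issues = clean_record({"amount": "1,250.00", "postcode": " sw1a  1aa", "country": ""})
# cleaned["amount"] == 1250.0, cleaned["postcode"] == "SW1A 1AA", issues == ["country defaulted"]
```

The value lies in the design of the rules themselves and in revisiting them as new issues emerge, not in any particular tool.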

Last, but by no means least, is data reconciliation. Time and time again organisations put too little emphasis on this. Whether it is a migration or a new solution built from requirements up, without deep reconciliation you simply cannot be assured of Data Integrity. So what is deep reconciliation? It means saying goodbye to reconciliation by counts, sums and tolerances, and instead performing a row-by-row, column-by-column comparison of source and target data. Only at this level can you truly be assured of Data Integrity. Sounds like an onerous task, right? It certainly used to be, but there are now plenty of tools that can make this essential task both efficient and effective.
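For illustration, here's a bare-bones sketch of what deep reconciliation means in practice: every row matched on a business key, every column compared, every difference reported. The data and key names are made up, and a real migration would run this at scale with proper tooling, but the idea carries over:

```python
def reconcile(source_rows, target_rows, key):
    """Row-by-row, column-by-column comparison of source and target data,
    matched on a business key. Returns every discrepancy found, rather than
    relying on counts, sums or tolerances."""
    source = {row[key]: row for row in source_rows}
    target = {row[key]: row for row in target_rows}
    discrepancies = []

    for k in source.keys() | target.keys():
        if k not in target:
            discrepancies.append((k, "missing in target"))
        elif k not in source:
            discrepancies.append((k, "missing in source"))
        else:
            for col in source[k]:
                if source[k].get(col) != target[k].get(col):
                    discrepancies.append(
                        (k, f"{col}: {source[k].get(col)!r} != {target[k].get(col)!r}"))
    return discrepancies

# Hypothetical extract from a migration
src = [{"id": 1, "balance": 100.0}, {"id": 2, "balance": 250.0}]
tgt = [{"id": 1, "balance": 100.0}, {"id": 2, "balance": 205.0}]
print(reconcile(src, tgt, key="id"))  # [(2, "balance: 250.0 != 205.0")]
```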

So, the bottom line: next time you set out to design or review an MI solution, don't let Data Integrity slip to the bottom of the list, as it so often does. Put it right at the top, where it belongs.
