Data Validations Leading Practices

August 9th, 2017

Over my 15-plus years of consulting, one of the great constants in working with Oracle Hyperion EPM is that you cannot hide from data validations. It must be done. One of the eternal questions is: “Does the data tie out?” And of course, we know that the answer must always be affirmative. In this blog post, we’ll discuss the leading practices that I recommend you follow to be successful with data validations.

First, there must be a plan. Data validations are typically considered a project within a project. As with any successful project, you should have a plan that shows what tasks are to be accomplished, who needs to be involved, and when they should be started and completed. The plan should be developed and agreed to before you begin to load your historical data into the implemented application. Here are the items that make for a successful data validation plan framework:

  • Define the scope of the data validation – what are the fiscal years and periods to validate? When should the data validation process be completed within the overall project timeline?
  • Identify the source(s) of the original data – what do we validate data against? Typically, we validate against the source system(s)/application(s) from which we are extracting the historical data. Obviously, ensure that the original data is static. We typically recommend that the original data be locked down to read-only status if that has not already been done. Sometimes it is a good idea to take an application copy of the original data, for example, when the source is a Hyperion Enterprise application that is being migrated to a new Oracle Hyperion Financial Management (HFM) application.
  • Determine the quality of the source data and set your acceptable materiality threshold – how clean is the source data? What is management’s tolerance for potential data reconciliation differences? It is good practice to outline the problem resolution path when a material data difference is identified – with whom should the data difference be reviewed, and who approves the resolution plan to fix it? (A simple sketch of a materiality check appears after this list.)
  • Assign tasks to the key personnel involved and ensure that data validation tasks are given high-priority status – who is responsible for validating the data and who signs off? Communicate the data validation work to be completed and the target timeline. Ensure that access to the source system(s)/application(s) and the target application will be active and working for all personnel involved.
  • Make the data validation process repeatable, because you will often have to validate data again after an unexpected change is made. More on this topic below.
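
To make the materiality-threshold idea in the list above a bit more concrete, here is a minimal sketch of the kind of check a validation script or spreadsheet performs when comparing source and target balances. The account names, balances, and threshold value are made-up assumptions for illustration, not figures from any project.

```python
# Hypothetical illustration of a materiality threshold check for data validation.
# The accounts, balances, and threshold below are made up for the example.

MATERIALITY_THRESHOLD = 1000.00  # maximum acceptable difference, in reporting currency

source_balances = {"Cash": 1250000.00, "Receivables": 843500.00, "Payables": -612300.00}
target_balances = {"Cash": 1250000.00, "Receivables": 843499.50, "Payables": -615300.00}

for account, source_value in source_balances.items():
    target_value = target_balances.get(account, 0.0)
    difference = source_value - target_value
    if abs(difference) > MATERIALITY_THRESHOLD:
        # A material difference: route it through the agreed problem-resolution path.
        print(f"INVESTIGATE {account}: difference of {difference:,.2f} exceeds threshold")
    else:
        print(f"OK {account}: difference of {difference:,.2f} is within tolerance")
```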

Next, let’s dive into data granularity. This is the tedious part of the data validation process, and to be successful you cannot avoid it. Here is what I’ve found to work, at a minimum: (1) validate at the top consolidated level for your financial statements, such as the Balance Sheet and Income Statement; (2) validate at the trial balance level for key base level entities in both the local currency and the translated reporting currency. For some clients, we have validated all base level entities in step #2 to meet the most stringent data validation requirements. Additionally, validate data for dimensions other than the entity and chart of accounts. For example, there could be a cost center dimension and/or a geography/location dimension; these should be validated too, using a combination of steps #1 and #2. Related to data granularity is the question of fiscal years and periods, which hopefully you’ve defined in advance. Your organization’s historical data reporting requirements will help answer how frequently you should validate. Typically, I recommend validating all periods of historical data as the timeline permits. If you have 7 full fiscal years (standard 12 months each) of historical data, that means there are 84 periods to validate. If the project timeline is very tight, validating all 84 months might not be practical; in that case, you could scale back to validating the quarters only.
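
As a small illustration of scoping the periods to validate, the sketch below counts the periods for seven full fiscal years, both for all months and for quarter-ends only. The year range and month labels are assumptions for the example.

```python
# Hypothetical sketch of scoping the periods to validate: all months of seven
# fiscal years versus quarter-end periods only.

MONTHS = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
          "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]
QUARTER_ENDS = {"Mar", "Jun", "Sep", "Dec"}
FISCAL_YEARS = range(2011, 2018)  # seven full fiscal years of history (assumed range)

all_periods = [(year, month) for year in FISCAL_YEARS for month in MONTHS]
quarterly_periods = [(year, month) for year, month in all_periods if month in QUARTER_ENDS]

print(len(all_periods))        # 84 periods if every month is validated
print(len(quarterly_periods))  # 28 periods if the scope is scaled back to quarters
```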

Now, let’s circle back to the bullet point about having a repeatable data validation process. This is very important because, during an implementation project, it happens very often that a change is made that requires you to reload historical data. This could be driven by a change in the project scope. To have a repeatable data validation process, it helps to embrace the tools of Oracle Hyperion EPM. The key ones are (1) FDM/FDMEE, (2) SmartView, (3) Financial Reporting, and (4) the HFM journals module. Briefly, here is how they help:

  • FDM/FDMEE is the data integration tool to transform and load data into Oracle Hyperion EPM. It facilitates the conversion of historical data and keeps an archive of it. You can maintain mapping tables to drive how data is transformed from the source to the target (a simple mapping sketch appears after this list). You can do bulk data loading to load multiple data files and/or multiple periods of data, which helps you be consistent in loading data files. Use FDM/FDMEE as often as possible to load your historical data, even if only a minimal amount of conversion takes place. Its ability to archive your source data files also means you can get back to them quickly, and you can use FDM/FDMEE to drill into the source data. Additionally, FDM/FDMEE has an optional feature to load data files as journals into HFM if that is your target application. This feature might be useful when you need to adjust historical data with journal entries.
  • SmartView is the Microsoft Office add-in tool to analyze data. On almost all Oracle Hyperion EPM projects, we develop data validation spreadsheets using SmartView. You can use either the SmartView Ad-hoc or Functions feature; Ad-hoc tends to be more robust and has less impact on file size. Once you develop a data validation spreadsheet in Excel, it can become a template that you can copy, paste, and quickly modify. On implementations involving both an older version of Hyperion software and the current Oracle EPM, you can pull data from the target application using SmartView in one spreadsheet tab and connect another tab to the legacy Hyperion software (for example, Hyperion Enterprise Retrieve). In Excel, you can then easily compute the differences between the data pulled from the source and the target.
  • Financial Reporting (FR) is the tool to create and run canned reports. For some data validation scenarios, it might make sense to create canned reports that data validators can use to analyze data when there are differences. You can use the expansion feature in FR to drill down on dimensions to their lowest level.
  • For clients where HFM is the target application, the journals module can be used in the data validation process. I recommend using it to capture and store adjustments to historical data. During the data validation process, it is sometimes tempting to directly adjust a source data file or the source application, or to create a separate data file to load. That can be a quick fix, but it can lead to a lot of confusion down the road about the quality of the source data. Instead, keep adjustments to historical data as distinct HFM journals so they can be seen separately and undone if necessary. It is easier to unpost a journal in HFM than to manually track down an adjustment made outside of HFM. It goes without saying that journals are also better for auditing purposes.
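
To illustrate the mapping-table idea from the FDM/FDMEE bullet, here is a minimal sketch of applying a source-to-target account mapping before load and flagging unmapped members. The account codes and target names are hypothetical, and this is not FDM/FDMEE’s own API – just the general concept.

```python
# Hypothetical sketch of a source-to-target account mapping, similar in spirit to
# an FDM/FDMEE mapping table (this is not FDM/FDMEE code; codes are made up).

account_map = {
    "10100": "Cash",          # explicit one-to-one mappings
    "12000": "Receivables",
    "20100": "Payables",
}

source_records = [
    ("10100", 1250000.00),
    ("12000", 843500.00),
    ("20100", -612300.00),
    ("99999", 50.00),         # unmapped account: should be flagged, not loaded
]

mapped, unmapped = [], []
for source_account, amount in source_records:
    target_account = account_map.get(source_account)
    if target_account is None:
        unmapped.append((source_account, amount))   # review before reloading
    else:
        mapped.append((target_account, amount))

print("Ready to load:", mapped)
print("Needs mapping:", unmapped)
```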

In conclusion, if you apply the leading practice ideas from this blog, you should be able to give the most rewarding three-word answer when someone asks if your data ties out – “Yes, it does.”
