By TopDown Team
June 13th, 2016
Recently, we’ve implemented FDMEE version 11.1.2.4 at a major client. This client had been using FDM Classic version 11.1.2.x for many years and needed to upgrade to version 11.1.2.4 for compliance and maintenance support reasons. When we started this project, we installed the base version of FDMEE 11.1.2.4 with the .100 patch. Over the course of our implementation we ran into some noteworthy findings that we’ll discuss in this blog post. Our guideline for the FDMEE build was to use built-in functionality and keep the configuration fairly vanilla, avoiding any custom coding.
Our client runs their primary general ledger on Oracle E-Business Suite (version 12), so we utilized the FDMEE adapter for Oracle EBS and created one source system connection to Oracle EBS in FDMEE. Two HFM applications are in use (a primary Corporate consolidation application and a subsidiary application), and both use FDMEE to pull data from the EBS G/L, so we created two target applications in FDMEE. Data loading for the primary Corporate consolidation application is decentralized across the business units, so there are over 50 different FDMEE data load locations that use the source connection to Oracle EBS. Each of these locations pulls data for at least one company (legal entity) from the G/L, and the biggest location pulls about 40 companies. The subsidiary HFM application is centralized: a single FDMEE data load location pulls data for about 150 companies from the EBS G/L.

When we moved into the system integration testing phase of our project, we started to test simultaneous/concurrent data loading, with multiple FDMEE data load locations executing the Import/Validate/Export steps at the same time. Immediately we saw Import errors in FDMEE: the Import step would only complete for the first location, while the remaining locations would fail. In Process Details, the log files for the failing locations showed messages of this type: “RuntimeError: The data rule ‘xyz’ with Process ID ‘123’ must complete before extracting data from the same ledger…” We opened an SR with Oracle Support and, long story short, Oracle Support confirmed that this is the intended behavior of the Oracle EBS adapter: it does not allow simultaneous data loading by multiple locations.
Oracle Support indicated that an enhancement request is already recorded to change this intended behavior, and of course we added our client to the enhancement request list. In the interim, we had to apply a workaround for our 50+ FDMEE data load locations:
- Create additional source system connections in FDMEE using the Oracle EBS adapter. They are all essentially identical; we went from one source system connection to over 10.
- Then rebuild the FDMEE data load locations so that smaller groups of them share the same source system connection. This does not completely eliminate the chance of the RuntimeError, but it meaningfully reduces how often it can occur.
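To illustrate the workaround, here is a minimal sketch of how the locations could be spread evenly across the duplicate source system connections. The connection and location names are hypothetical, and this runs outside FDMEE; the actual assignment is done manually in the FDMEE location setup.

```python
# Sketch: spread N data load locations round-robin across a set of
# identical Oracle EBS source system connections, so concurrent
# Import runs are less likely to contend for the same connection.
# All names below are made up for illustration.

def partition_locations(locations, num_connections):
    """Assign each location to a source connection, round-robin."""
    groups = {f"EBS_SOURCE_{i + 1:02d}": [] for i in range(num_connections)}
    names = sorted(groups)
    for idx, loc in enumerate(sorted(locations)):
        groups[names[idx % num_connections]].append(loc)
    return groups

locations = [f"LOC_{n:02d}" for n in range(1, 51)]  # 50 example locations
groups = partition_locations(locations, 10)        # 10 duplicate connections
for conn, locs in sorted(groups.items()):
    print(conn, len(locs))  # 5 locations per connection
```

With 10 connections and 50 locations, each connection serves only 5 locations, so two concurrent imports collide only if both locations happen to sit in the same group.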
Next, in the course of our testing on the .100 patch, we found that the Snapshot import mode was not working as intended. A data load location would start an import by clicking Import in FDMEE and choosing the Snapshot import mode; FDMEE would pull data from the EBS ledger, validate it, and finally the end user would Export the data into the HFM application. When we tested historical months, we saw differences in data balances, which made no sense because those periods were closed, so there was no timing issue. On a hunch, we ran the Full Refresh import mode, which took about 1 hour to complete at this client, and this time the data balances matched. We found that we needed to apply the .200 patch to resolve this problem with the Snapshot mode.
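The reconciliation we did by hand can be sketched as a simple balance comparison between two extracts. The file layout and column names below are assumptions for illustration, not the actual FDMEE export format.

```python
# Sketch: compare balances by account between two extracts
# (e.g. a Snapshot import vs. a Full Refresh import) to surface
# the kind of differences we saw on the .100 patch.
import csv
import io
from collections import defaultdict

def balances_by_account(csv_text):
    """Sum the Amount column per Account (assumed column names)."""
    totals = defaultdict(float)
    for row in csv.DictReader(io.StringIO(csv_text)):
        totals[row["Account"]] += float(row["Amount"])
    return totals

def diff_balances(a, b, tol=0.005):
    """Return accounts whose balances differ by more than a tolerance."""
    return {acct: (a.get(acct, 0.0), b.get(acct, 0.0))
            for acct in set(a) | set(b)
            if abs(a.get(acct, 0.0) - b.get(acct, 0.0)) > tol}

snapshot = "Account,Amount\n1000,100.00\n2000,50.00\n"
full_refresh = "Account,Amount\n1000,100.00\n2000,75.00\n"
diffs = diff_balances(balances_by_account(snapshot),
                      balances_by_account(full_refresh))
print(diffs)  # → {'2000': (50.0, 75.0)}
```

Any non-empty result for a closed historical period is a red flag that the two import modes are not returning the same data.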
Our client also uses the FDM journals functionality and has a business process of completing the FDM Excel journal template and uploading the data through FDM into the HFM application. Every location is required to be able to upload the FDM journal template. What we discovered in FDMEE 11.1.2.4 is that data load locations that use the Oracle EBS source system do not have the Load Journal button available; it does not show up in the Data Load Workbench.
Image below: what we were expecting to see, with the Load Journal button available
Image below: what actually happens for Oracle EBS source system locations – no Load Journal button
And of course we checked whether the .200 patch would add back the missing Load Journal button, but it did not; Oracle has not yet addressed this issue. In the interim, we had to apply a workaround:
- Create new FDMEE data load locations dedicated to loading the FDM Excel journal templates. These new locations are set up with an import format that uses the flat file setting.
- Then use the Parent mapping property on these locations so that the new journals-only data load locations share the same mapping tables as the main EBS G/L data load locations.
Lastly, we found that uploading mass changes to the mapping tables could allow duplicate source mappings to exist. This occurred while our FDMEE environment was on the .100 patch. We would upload a delimited flat file containing the complete mappings for one dimension into the FDMEE location. User errors would occur where the same source system code was mapped more than once; for example, source account “ABC12” would map to target “Cash” in one row and to target “Prepaids” in another row. The same thing could happen with a Script in the Like mappings. The FDMEE mapping upload did not catch the duplicate source mappings, and the Validate button in the mapping table did not identify them either.

Then, when the user went to the workbench to Import & Validate data, the Validate step would fail. It would show that none of the data rows for the Account, Intercompany, and Custom dimensions were mapped, so the target columns were blank, even though mappings exist for all of those dimensions. The log file in Process Details showed an error message such as: “RuntimeError: [Error: Data rows with unmapped dimensions exist for Period xxxx].” The resolution was to go back and clean up the duplicate source mappings. We then applied the .200 patch, which does a better job of catching duplicate source mappings during the flat file upload process.
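Since the .100 upload and the Validate button both missed this, a quick pre-upload check of the flat file would have caught the problem before it reached FDMEE. A minimal sketch follows; the column names (“Source”, “Target”) and file layout are assumptions for illustration, not the exact FDMEE mapping import format.

```python
# Sketch: scan a delimited mapping file for source values that are
# mapped to more than one target, before uploading it to FDMEE.
import csv
import io
from collections import defaultdict

def find_duplicate_sources(csv_text, delimiter=","):
    """Return sources mapped to multiple distinct targets."""
    targets = defaultdict(set)
    for row in csv.DictReader(io.StringIO(csv_text), delimiter=delimiter):
        targets[row["Source"]].add(row["Target"])
    # A source with more than one target is a conflicting duplicate.
    return {src: sorted(tgts) for src, tgts in targets.items()
            if len(tgts) > 1}

mappings = "Source,Target\nABC12,Cash\nXYZ99,Payables\nABC12,Prepaids\n"
print(find_duplicate_sources(mappings))  # → {'ABC12': ['Cash', 'Prepaids']}
```

Running a check like this against each dimension’s file before upload turns a confusing Validate failure in the workbench into an explicit list of conflicting rows to fix.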
This blog post was co-authored by Dwight D. Davis, Solution Architect.