Updating HFM Application Server Keys on a Large HFM Application


February 11th, 2014

Recently I had to update the HFM application server keys on a large HFM application running on version 11.1.2.x, hosted on a single dedicated Windows Server 2008 (64-bit) machine with two 8-way 3.0 GHz CPUs and 32 GB RAM. The backend is Oracle 11g. End-user load was moderate: the number of concurrent users typically topped out at around 10. The application's dimensionality was in the medium range:

2 scenarios

400+ unique entities

35 currencies

4700+ unique accounts

6000+ unique custom1

33000+ unique custom2

The business rules were fairly straightforward, without many complex calculations.

The data came in through an SAP G/L data load. Data density was high, and for FDM to process it we had to split the load into two files. In fact, the SAP data files were so large that the FDM batch loader was the only way to load them; a manual user-driven load would time out at import.
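The splitting step above can be scripted. This is only a minimal sketch of the idea, not the actual FDM process we used; the function name and the assumption of a line-oriented extract (header handling omitted) are mine.

```python
# Sketch: split a large delimited G/L extract into two roughly equal load
# files so each piece stays small enough for FDM to import. Assumes a
# line-oriented extract; header/trailer handling is omitted for brevity.
def split_extract(lines: list[str], parts: int = 2) -> list[list[str]]:
    """Return `parts` chunks of `lines`, each of near-equal size."""
    chunk = (len(lines) + parts - 1) // parts  # ceiling division
    return [lines[i:i + chunk] for i in range(0, len(lines), chunk)]

if __name__ == "__main__":
    rows = [f"row {i}" for i in range(11)]
    for n, part in enumerate(split_extract(rows), start=1):
        print(f"file {n}: {len(part)} rows")
```

Each chunk would then be written to its own file and picked up by the FDM batch loader.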

How I updated the keys:

To get started, I read through the Oracle HFM tuning guide (updated December 2013) and made some of the modifications it suggested.

(1) Memory settings modifications:

MinDataCacheSizeInMB set to 2000

MaxDataCacheSizeInMB set to 4500

MaxNumDataRecordsInRAM set to 30000000

(2) Thread settings modification:

NumConsolidationThreads set to 8

All four registry settings were added as new DWORD decimal values at this location — HKEY_LOCAL_MACHINE\SOFTWARE\Hyperion Solutions\Hyperion Financial Management\Server.
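For repeatability across environments, the four settings can be captured in a .reg file rather than typed into regedit by hand. The sketch below generates that file from the values listed above; the script itself is my own illustration, but the key path, value names, and decimal values are exactly those from this post.

```python
# Sketch: generate a .reg file containing the four HFM Server DWORD
# settings described above. Apply the resulting file with regedit on the
# HFM application server (back up the key first).
SETTINGS = {
    "MinDataCacheSizeInMB": 2000,
    "MaxDataCacheSizeInMB": 4500,
    "MaxNumDataRecordsInRAM": 30_000_000,
    "NumConsolidationThreads": 8,
}

KEY = (r"HKEY_LOCAL_MACHINE\SOFTWARE\Hyperion Solutions"
       r"\Hyperion Financial Management\Server")

def build_reg_file(settings: dict) -> str:
    lines = ["Windows Registry Editor Version 5.00", "", f"[{KEY}]"]
    for name, value in settings.items():
        # .reg files store DWORDs as 8-digit hex (decimal 2000 -> 000007d0)
        lines.append(f'"{name}"=dword:{value:08x}')
    return "\n".join(lines) + "\n"

if __name__ == "__main__":
    print(build_reg_file(SETTINGS))
```

Keeping the values in one file makes it easy to promote the same settings from DEV to PROD, and to diff them later if the cache sizes are raised.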

Additionally, since an Oracle OLE DB provider was also part of the setup, we made a modification Oracle suggests for statement caching to mitigate memory-leak issues (Oracle Doc ID 761418.1).

Next I performed several rounds of testing by running consolidations in the HFM application and saw roughly a 25-33% reduction in consolidation time. For example, a Consolidate All with Data on one primary entity hierarchy for one period took 20-21 minutes before the changes and 13-15 minutes after.

After the registry memory settings update, you will see the HsvDataSource process on the server consume more memory as you execute tasks in the HFM application that call up new subcubes. When it reaches the maximum memory threshold and you continue to execute subcube tasks, it releases records to free up memory for new subcubes to load; in System Messages or the HFM event log you will see the "FreeLRUCachesIfMoreRAMIsNeeded" messages. I did not see those messages often after the modifications. According to the Oracle tuning doc, I could raise the memory settings even further since the server has 32 GB RAM, but I left them at these levels because the testing was done on the DEV box, which has fewer resources. We may go back and push them up based on end-user feedback.
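The quoted improvement range is easy to verify with quick arithmetic from the before/after timings (20-21 minutes down to 13-15 minutes):

```python
# Quick check of the improvement range from the before/after timings.
def pct_improvement(before: float, after: float) -> float:
    """Percent reduction in consolidation time."""
    return (before - after) / before * 100.0

if __name__ == "__main__":
    # Worst case: fastest "before" run vs. slowest "after" run
    print(f"worst case: {pct_improvement(20, 15):.1f}%")  # about 1/4 faster
    # Best case: slowest "before" run vs. fastest "after" run
    print(f"best case:  {pct_improvement(21, 13):.1f}%")  # about 1/3 faster
```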

On opening data grids, web forms, and running Financial Reporting reports, the new 11.1.2.x code base is slightly slower. The modified memory settings do help speed things up a bit once enough subcubes have been loaded into memory. Nevertheless, loading data grids in 11.1.2.x remains somewhat slower than in the prior release, possibly due to the change in the code base.

Our objective going into this process was to maximize the benefit of moving to a 64-bit platform with plenty of memory. I wanted faster consolidation processing times, and the testing results show we achieved a significant improvement. Note that your results may not match what I've described; they will vary with factors such as application design, HFM rules design, and hardware setup.


About TopDown Team

The TopDown Team includes members of TopDown Consulting who want to let the community know about webcasts, conferences, and other events. The team also conducts interviews on various EPM industry topics.
