I recently learned that you should delete any unused columns and rows in Template Studio to help your reports render more quickly. Does anyone know of any other helpful tips to improve performance?
At our company, we have quite a few GL accounts/entities, and sometimes, when you are pulling all the info, it takes a while to load. If you choose the ‘exclude zeros’ option, that reduces the time (not the most unique advice, haha, but just something I went through!)
A consultant told us about that option. It does reduce processing time.
Good to know, thanks! We have lots of entities too, so performance slows down when running several entities with all GL data. I’ll have to try your tip. Thanks again.
I find that it’s helpful to limit the dimension members to those you’re focused on. Even if a rollup will give you exactly the same value, the cost of rolling it up is real.
My natural tendency is to build reports or queries at the highest levels to make sure I don’t miss anything. That’s perfect for ad hoc reporting or for figuring out how to design a new report. But once I’m ready to make it official, slimming the member selection down to the most focused part of the hierarchy helps speed things up.
Hi Crystal,
There are many things you can do to speed up your templates, for sure. It comes down to what the template is trying to achieve. A few common ones are:
You can use the ‘treat zeroes as missing data’ option to hide all-zero values/rows
If you end up hiding more columns/rows than you need to see on the screen, it may be a good idea to use unique-combinations
When necessary, template calculations might be a better option than pulling the info from the cube (for instance, to calculate Net Income, the topmost level), depending on how complex the report is
Hope this helps,
Navin
Thanks for sharing, Bob!
Thanks Navin! I appreciate your response.
In Template Options (via Design mode), you can toggle the “Auto Refresh” option OFF. That way, when performing Data Entry on the Template, especially if you are performing many spreads or changing Line Item Schedule data, you can manually refresh the calculated results at the end rather than with each cell value change.
Unless absolutely necessary, I keep all of my dimensions at the highest level. Most of the time, that’s sufficient for the end users!
I’ve been caught out by having too many dimensions in unique-combinations combined with hiding zero rows and treating nulls as zeros. For example, having Time-Perspective-Version-Entity as a unique combination.
It may not actually make a difference, but for any “reporting” templates we like to set the analysis to read-only against the cube. It feels like it helps.
I turned the “Auto Refresh” button OFF to make it faster, which reminds me: when working on larger Excel files with a lot of formulas, I usually turn off “Auto Calculations” so the workbook won’t slow down.
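For the Excel side of that tip, here is a minimal VBA sketch of the same idea using the standard Excel object model (the macro name and the placeholder comment are just for illustration):

```vba
Sub RunHeavyUpdate()
    ' Switch to manual calculation and pause screen redraws
    ' before making a large number of cell changes.
    Application.Calculation = xlCalculationManual
    Application.ScreenUpdating = False

    ' ... perform the bulk updates here ...

    ' Recalculate once at the end, then restore the defaults.
    Application.Calculate
    Application.Calculation = xlCalculationAutomatic
    Application.ScreenUpdating = True
End Sub
```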
We have limited the number of Data Views in one report to no more than 3 or 4. I’m not sure if that is best practice or whether it makes a difference. Should it be 2 or fewer? What are everyone’s thoughts?
I also vacillate between two approaches: pulling the Data View into a section off to the side of the report and building out the report with formula links to all the data, versus building the formatting of the report right there in the data view. I know the second approach is probably the better / best-practice approach, but I find that when building some reports the first approach works better, because I can change the report parameters and it doesn’t mess up all my formatting when a new data view pulls in. I’m quite inexperienced at using the system, so any advice you guys have is appreciated.
In the Data View Properties, you can select “map row axis member selection change” or “map column axis member selection change” to help retain formatting when you modify the data view. Also, Format Painter is helpful.
Ben, I find myself doing a variation of that. There are a lot of times when I need to display data in some way that isn’t inherently natural to Prophix, such as showing IDs and names for leaf nodes and names only for non-leaf nodes. So I build a template formula to display the data in an added column within the data view area.
There are times when cloning a portion of a data view helps, but sometimes not. And cloning a data view feels like it would slow down the system, though I suspect that’s not really true. I expect that cloning merely creates a second pointer to the same result set rather than refetching the data.
Ben, I’d say the best practice is to keep the data views to the fewest you need to get the report you need. Any fewer and your report doesn’t work (which I consider a critical requirement).
Every view does require time to load, then time to link to the other views (assuming they are linked). But how much time it takes to load really depends more on the structure of the data view itself. So my tendency is to focus on making sure the view brings back only the data critical for the purpose.
Sometimes this means I make simpler templates with links to other templates for auxiliary but optional views. It also means I have a number of templates that are very similar to each other, which bugs me because if I change one I have to figure out which others also need to be changed for consistency’s sake, but that’s the trade-off for end-user performance.
Thanks very much, Crystal and Bob. That is very helpful, and it’s very kind of you to share your knowledge.
Thank you all for sharing your insights; they were all interesting to read. Now to put them into practice!