Changing the OpsMgr Data Warehouse retention periods & Using reports to assess impacts to Data Warehouse sizing

One of the requirements for an environment I am working with was to provide performance information for more than a two year period of time. By default, the grooming interval for performance data in Operations Manager is 400 days, so to meet this requirement we needed to change the grooming interval on the performance data. In our environment we have really come to appreciate a lot of the new reports, including the “Data Warehouse Properties” report, which is shown below with the default data retention periods.

[Figure DW01: Data Warehouse Properties report showing the default data retention periods]
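If you want to see where these retention values actually live, they are stored per dataset and aggregation level in the Data Warehouse database. Below is a minimal read-only T-SQL sketch, assuming the commonly documented StandardDataset and StandardDatasetAggregation tables in OperationsManagerDW; treat it as illustrative rather than an official query.

    -- Illustrative sketch: list retention (MaxDataAgeDays) per dataset
    -- and aggregation level in the OpsMgr Data Warehouse.
    USE OperationsManagerDW;

    SELECT sd.SchemaName         AS DatasetName,      -- e.g. Perf, Event, Alert, State
           sda.AggregationTypeId AS AggregationType,  -- 0 = raw, 20 = hourly, 30 = daily
           sda.MaxDataAgeDays    AS RetentionDays
    FROM   StandardDatasetAggregation AS sda
           INNER JOIN StandardDataset AS sd
                   ON sd.DatasetId = sda.DatasetId
    ORDER  BY sd.SchemaName, sda.AggregationTypeId;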

[TANGENT STARTS HERE: When we started with Operations Manager there were no methods available to estimate data warehouse sizing, so seeing this report really makes my day. This process has gone light-years past the types of calculations we used to build in Excel, estimating the impact that rows of different types of data would have on the Data Warehouse and then using a formula to project what database sizes would look like a year after deployment, but I digress… TANGENT ENDS HERE]

Note that the report shows the estimated maximum size for the database at just over 100 GB before we made the change. We changed the data retention for the performance data (hourly and daily aggregations) using the steps detailed in Kevin Holman’s blog entry available at http://blogs.technet.com/b/kevinholman/archive/2010/01/05/understanding-and-modifying-data-warehouse-retention-and-grooming.aspx. After making the change, the estimated Data Warehouse database size increased to just over 150 GB.
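Kevin Holman’s entry walks through making this change with the dwdatarp.exe command-line tool rather than by editing the database directly. Purely to illustrate where the setting lives, here is a hedged T-SQL sketch of the equivalent change, assuming the same StandardDataset / StandardDatasetAggregation schema as above and a two year (730 day) target; follow the steps in his blog entry for the supported approach.

    -- Illustrative sketch only (not the supported method): raise performance
    -- hourly (20) and daily (30) aggregation retention to 730 days.
    USE OperationsManagerDW;

    UPDATE sda
    SET    sda.MaxDataAgeDays = 730
    FROM   StandardDatasetAggregation AS sda
           INNER JOIN StandardDataset AS sd
                   ON sd.DatasetId = sda.DatasetId
    WHERE  sd.SchemaName = 'Perf'              -- performance data set
      AND  sda.AggregationTypeId IN (20, 30);  -- hourly and daily aggregations

Re-running the Data Warehouse Properties report after a change like this is what produced the revised size estimate shown below.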

[Figure DW04: Data Warehouse Properties report showing the increased size estimate after the retention change]

Summary: If you want to project how data retention changes will affect your database sizes, check out the new Data Warehouse Properties report, and run it both before and after making the changes!

Comments

  1. LenHavron

    For some reason I do not see “Estimated Maximum Size” or “Estimated average daily size growth”. I am on R2 and have the latest reports, and also have CU2 installed. Any ideas why these two are missing?

  2. Cameron Fuller Post author

    That’s an excellent question, and I wish I knew the answer as well. I have three environments available to me: my QA and production environments have this information but my lab environment does not. All of them are R2 with CU2 installed as well. Does this happen to be a lab environment (where it’s not on all the time)? I’m wondering if there is some sort of scheduled job that has to run to make this information available…

  3. Marty List

    Great info, thanks for sharing! How much do you trust the estimated maximum size it suggests? The average daily estimate works out to about 190 GB a year ((534.9 MB * 365) / 1024). The daily and maximum numbers aren’t close in my environment either.

  4. LenHavron

    Cameron, this is a production environment… although I do have to say the traffic is comparatively low since we are only monitoring about 45 servers, mainly doing application monitoring, but heavy in performance collection. It is challenging to keep up with this stuff every day but your blog helps quite a bit. Keep up the good work.
