45-A
Integrating the Reserve Distribution Process into Enterprise Risk Management

Tuesday, April 1, 2014: 8:30 a.m.
Maryland Suite C (Washington Marriott Wardman Park)
Never has it been more important for actuaries to improve their understanding of reserve variability. Updated International Financial Reporting Standards (IFRS Phase II) will likely require all insurance companies to record an independently measured and updated risk margin. In Europe, Solvency II directives already require the recognition of a risk margin, and validation standards require the Actuarial Function to comment on material deviations from prior expectations.

Back-testing enables the reserving actuary to assess the “new” information inherent in the loss triangles relative to the “known” information and future expectations inherent in the analysis. Without an analysis of reserve variability, the significance of deviations from expectations cannot be assessed. Even with an analysis of reserve variability, distinguishing between mean estimation error, variance estimation error, and random error is difficult. A systematic back-testing process, embedded in a comprehensive ERM system and drawing on the output of prior reserve variability analyses, significantly increases the actuary’s ability to assess deviations from expectations and provides management with an early indicator of performance relative to the actuary’s expectations.
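
As a minimal illustration of such a back-test, the sketch below locates an observed one-year development amount within the predictive distribution produced by a prior reserve variability analysis. The distribution, amounts, and variable names are hypothetical and are not drawn from the session materials.

```python
import numpy as np

def back_test_percentile(prior_samples, actual):
    """Return the empirical percentile of the observed one-year development
    within the prior predictive distribution; values near 0 or 1 flag
    deviations the prior analysis considered unlikely."""
    return float(np.mean(np.asarray(prior_samples) <= actual))

# Hypothetical output of last year's variability analysis: 10,000 simulated
# next-year incremental paid amounts for a reserve segment.
rng = np.random.default_rng(0)
prior_samples = rng.lognormal(mean=np.log(5_000_000), sigma=0.15, size=10_000)

# Hypothetical incremental paid amount actually observed during the year.
actual_paid = 6_200_000

p = back_test_percentile(prior_samples, actual_paid)
print(f"Observed development falls at the {p:.1%} percentile of the prior distribution")
```

Repeating such a comparison across segments and valuation dates provides the early indicator described above, and a pattern of extreme percentiles helps distinguish mean or variance estimation error from ordinary random error.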

Within a comprehensive ERM solution, assumption consistency becomes an important challenge. When selecting a point estimate for an unpaid loss reserve, the practicing actuary commonly weights the results from multiple methods. By assigning weight to a method, the actuary partially accepts or rejects the assumptions inherent in each method contributing to the selection. Therefore, the future expectation for each data element (e.g., incremental paid losses) is a weighted average of the expected data element under each method that received weight. Likewise, the uncertainty inherent in the selected estimate is more appropriately modeled as a weighted average of the uncertainty implied by the methodology underlying each model used to estimate it. An approach that uses a single model (e.g., Mack) to estimate the uncertainty around a point estimate based on multiple methods relies on an assumption set that was at least partially rejected during the selection of the point estimate.
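
A minimal sketch of this blending follows, assuming two hypothetical methods with illustrative weights and distributions (none of which come from the session materials): the expected data element is the weighted average of the methods' expectations, and the predictive distribution is a mixture that draws from each method's simulated outcomes in proportion to its weight, rather than relying on a single model.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Hypothetical weights used when selecting the point estimate.
weights = {"paid_chain_ladder": 0.6, "bornhuetter_ferguson": 0.4}

# Hypothetical expected next-period incremental paid losses by method.
expected = {"paid_chain_ladder": 4_800_000, "bornhuetter_ferguson": 5_400_000}

# Hypothetical simulated predictive samples from each method's own variability model.
samples = {
    "paid_chain_ladder": rng.lognormal(np.log(4_800_000), 0.20, size=n),
    "bornhuetter_ferguson": rng.lognormal(np.log(5_400_000), 0.10, size=n),
}

# Expected data element consistent with the selection: a weighted average.
blended_mean = sum(weights[m] * expected[m] for m in weights)

# Blended predictive distribution: a mixture drawing from each method's
# distribution in proportion to the weight it received in the selection.
mixture = np.concatenate([
    rng.choice(samples[m], size=int(weights[m] * n), replace=True)
    for m in weights
])

print(f"Blended expected incremental paid: {blended_mean:,.0f}")
print(f"Mixture 75th / 95th percentiles: "
      f"{np.percentile(mixture, 75):,.0f} / {np.percentile(mixture, 95):,.0f}")
```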

This session will examine a framework for testing and validating reserve distributions and demonstrate its use with real datasets within an Enterprise Risk Management context. We will also discuss the impact that various one-year development scenarios may have on next year's estimate of reserve variability.

Presentation 1
Jeffrey Courchene, Principal, Senior Consultant, Milliman
Handouts
  • Integrating Reserve Variabilty and ERM.pdf (818.5 kB)