A Popular Product Known as Insurance
Introduction
Insurance is a financial product that essentially protects the insured against various risks through compensation payments. A question of interest then arises: why would providers of insurance offer such an obscure product, one that can hardly be seen as profitable from a layman's perspective? The answer is simple, yet logical: providers of insurance must hold a sufficient provision to keep the business on a sound footing. To date, continuous efforts have been made by actuaries to develop models that measure the uncertainty in insurance payouts, more technically known as claims. As a result, actuarial models that incorporate future variability are now used extensively.
Actuarial models are now widely used to investigate the different types of problem an insurance company might face. Since actuarial models should be of practical use, they need to be consistent, realistic, accurate, and results based. Actuarial models are usually classified into deterministic models and stochastic models. The former correspond to the traditional approach to assessing long-term financial implications because of their simplicity: they take "best estimates" for the underlying parameters and generate the most probable outcome. However, they ignore the probabilistic nature of occurrences and are thus not always suitable. The latter are the opposite of deterministic models in that random variation in the variables is allowed for, creating an opportunity to model real-world events.
Of late, the adoption of stochastic modelling techniques has increased rapidly as part of a gradual shift away from deterministic models. Stochastic models are essentially instruments for working out the likelihood of undesirable occurrences after performing a list of operations, allowing for both a random element and a time element. Generally, they are used to attach probability distributions to various cash flows and capital instruments. Stochastic models have their historical roots in random walks, and the expansion of random walks into concepts such as time-homogeneous and time-inhomogeneous Markov models and compound Poisson models has led to continuously growing research on stochastic models. (For the underlying theory of stochastic models, please refer to CT4, previously known as Core Reading 2000: Subject 103, from the Institute of Actuaries.)
To highlight a clear relationship between insurance and stochastic modelling, the concept of insurance being a type of risk management used to hedge against the possibility of loss needs to be understood. In insurance, the term risk pooling is used to classify clients into different cohorts of risk. That is, clients agree to bear losses in equal amounts, each paying the average loss. By forming a pooling arrangement, businesses can diversify their risk. (See Harrington & Niehaus (2003) pg 54-74, regarding pooling arrangements and diversification of risk)
To account for these cash outflows, known as insurance claims, a premium arrangement is made as a source of revenue. In the short run, the premium charged should be proportional to the severity/sizes of claims; in doing so, the insurer can be reasonably confident that the business will be lucrative in the long run. However, insurers are often faced with conflicting interests: charging inadequately will undermine the profit of the business, but overcharging decreases demand for the product. The key to determining the right amount depends entirely on how much the policyholder expects to lose and the prevailing level of risk aversion.
Given the random nature of these factors, stochastic models are produced to achieve an accepted level of solvency for the insurer, where premiums less payouts are, in very broad terms, positive. A widely used approach is the ratio of net premiums written to claims, since net premiums written relate more directly to the sizes of claims. A broadly accepted solvency ratio in India, according to Pradeep (2004), is around 1.5, where the additional 50% acts as a cushion against market uncertainties (for example, market crashes). (See Harrington & Niehaus (2003) pg 115-133, regarding Solvency Ratings and Regulations)
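As a purely illustrative calculation with hypothetical figures, an insurer writing net premiums of 150 against expected claims of 100 would report

$$\text{solvency ratio} = \frac{\text{net premiums written}}{\text{claims}} = \frac{150}{100} = 1.5,$$

which corresponds exactly to the 50% cushion described above.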
Hence, the main objectives of stochastic modelling in insurance lie in optimal resource allocation, technical reserving (provisions), asset and economic modelling, and product pricing.
This dissertation focuses on several stochastic claims reserving approaches used in general insurance. Similarly, pricing premiums in stochastic environments will also be introduced. In addition, a highly sought-after approach known as computer simulation, which has gained much popularity in recent years, will be covered; it involves producing approximate numerical solutions when problems are intractable analytically. Furthermore, the impending effects of the new insurance regime, Solvency II, on stochastic modelling techniques will be discussed. Finally, suggestions regarding improvements to the models will also be provided.
(For comprehensive literature on the distribution of claims, premium evaluations, reinsurance and ruin theory, please see Beard et al. (1984) and Rolski et al. (1998))
The structure of this dissertation is as follows:
The main aim of any insurance company is to generate profits. Two main factors that affect firms' profits are claims and premiums. Thus, Chapter 2 establishes the methods general insurance companies use to compute claims reserves. Subsequently, Chapter 3 provides several premium pricing methods used in general insurance.
Choosing a suitable claims reserving policy and premium pricing model can enable an insurance company to maximise its profitability. However, explicit formulae may not always be of much help. Thus, Chapter 4 covers the implementation of models using computer simulations.
Despite the adoption of sophisticated models, some of them are becoming obsolete across the European Union (EU) due to the implementation of Solvency II. Chapter 5 reviews the impacts on insurance companies across the EU.
In addition, Chapter 6 provides several suggestions that can be used to improve current models.
Finally, the conclusion of this dissertation is described in Chapter 7.
General Insurance - Claims Reserving
Overview
General insurance, commonly referred to as non-life insurance, comprises the following:
i) Property Insurance- covering damage to property
ii) Motor Vehicle/Transportation Insurance- covering damage to land vehicles and other means of transportation
iii) Disaster and Catastrophe Insurance- covering damage caused by natural disasters
iv) Liability Insurance- covering general liability losses
v) Large commercial risk insurance- covering large-scale losses such as those arising from the ‘Sep 11' incident
vi) Pecuniary Insurance- covering credit risk and miscellaneous financial losses
(Please refer to Diacon & Carter (1992) for more details on the above mentioned types of insurance and other specific types of insurance)
As stated by Booth et al. (2004), the main providers of general insurance in the UK are public limited companies, mutual companies, trade associations, Lloyd's syndicates, and insurance companies. These companies are known as direct insurers (with the exception of Lloyd's syndicates). The basic framework of general insurance revolves around providing payments when a loss is incurred through a financial event, technically referred to as a peril. Proper research on how to account for the severity of these payments has made general insurance a popular area of interest and has undoubtedly led to extensive study and research on this topic.
Claims Reserving Policy For Claims Already Incurred
The main objective of an insurer is to prepare a sufficient technical provision which embeds future uncertainties as well as the profitability factor. However, this is by no means easy, due to the number of unknown factors that must be taken into consideration.
In the United Kingdom, a few consultation papers regarding risk-based capital techniques have been published in which general insurers are advised to apply a claims risk factor to outstanding insurance claims when calculating the Enhanced Capital Requirement. (For the full articles, please see links to CP136 and CP190 under References)
Here we look at the building blocks of the arrival of claims and the techniques used to create sufficient provisions. For general insurance, there are various techniques available to identify a suitable claims reserving policy. Amongst them are the deterministic chain-ladder technique, Bayesian models/Bornhuetter-Ferguson, Mack's model, and many more.
Before proceeding to the basics, we need to establish a few facts about insurance claims. Claims are measured via frequency and severity; in other words, we need to estimate the number and sizes of claims separately. The general idea is to assume some prior knowledge about the distribution of claims and how it behaves. Next, we apply certain deterministic/stochastic techniques and find the best estimate of each parameter. The method of moments and maximum likelihood estimation are some of the more popular techniques used to determine a best estimate (for more clarity, see Klugman et al. (2004)). In conjunction with prior knowledge about claims, subjective judgement is also needed in selecting a suitable value for each parameter. Here, historical data provide a useful guideline for the range of values a parameter can take.
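To fix ideas, this frequency-severity decomposition is often written as follows; the particular distributional choices below (Poisson counts and gamma severities) are illustrative assumptions rather than anything prescribed above:

$$S = \sum_{k=1}^{N} X_k, \qquad N \sim \text{Poisson}(\lambda), \qquad X_k \overset{\text{iid}}{\sim} \text{Gamma}(\alpha, \beta),$$

where $S$ is the aggregate claim amount, $N$ the number of claims, and $X_k$ the individual claim sizes; the parameters $\lambda$, $\alpha$ and $\beta$ are then estimated separately, for example by the method of moments or maximum likelihood.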
Chain-Ladder Model
Although this is a deterministic model, it is nevertheless important, purely because the chain-ladder model serves as a foundation for the more complicated stochastic models discussed in subsequent sections. Before moving on to the procedure itself, we need to make a few important assumptions about this model:
i) Stable and stationary development of claims in the past and future.
ii) No change in inflation rate.
iii) Composition of insurance portfolio remains constant for all periods.
We define $X_{ij}$ to be the incremental claim amount with year of origin $i$ and year of development $j$, for $i, j = 1, \dots, n$.

The next step is to calculate the cumulative claim amounts using:

$$C_{ij} = \sum_{k=1}^{j} X_{ik}.$$

For the most basic form of the chain ladder, ignoring inflation and other factors, the development factor for development year $j$ is obtained through the formula:

$$\hat{f}_j = \frac{\sum_{i=1}^{n-j} C_{i,j+1}}{\sum_{i=1}^{n-j} C_{ij}}, \qquad j = 1, \dots, n-1.$$

Next, the cumulative development factors will be:

$$\hat{F}_j = \prod_{k=j}^{n-1} \hat{f}_k.$$

As a result, estimated cumulative claim payments for a future development year $k$ can be obtained by applying the development factors successively to the latest observed cumulative claim, $\hat{C}_{ik} = C_{i,n+1-i} \prod_{j=n+1-i}^{k-1} \hat{f}_j$, so that the ultimate claim is $\hat{C}_{in} = \hat{F}_{n+1-i}\, C_{i,n+1-i}$.

Finally, to calculate the estimated reserve for a particular year of origin $i$, we use:

$$\hat{R}_i = \hat{C}_{in} - C_{i,n+1-i}.$$

Hence, the total reserve will be the sum of all the individual reserves, $\hat{R} = \sum_{i} \hat{R}_i$.
(For more in-depth explanations regarding this method, please see Booth et al. (2004))
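As a minimal illustration of the procedure just described, the following Python sketch computes development factors, projected ultimates and reserves for a small cumulative run-off triangle; the triangle figures are hypothetical and chosen purely for illustration:

```python
import numpy as np

# Hypothetical 4x4 cumulative run-off triangle (rows = years of origin,
# columns = development years, NaN = not yet observed).
triangle = np.array([
    [1000., 1800., 2100., 2200.],
    [1100., 1900., 2300., np.nan],
    [1200., 2100., np.nan, np.nan],
    [1300., np.nan, np.nan, np.nan],
])
n = triangle.shape[0]

# Development factors f_j = sum_i C_{i,j+1} / sum_i C_{i,j},
# using only the rows in which both columns are observed.
f = []
for j in range(n - 1):
    rows = ~np.isnan(triangle[:, j + 1])
    f.append(triangle[rows, j + 1].sum() / triangle[rows, j].sum())

# Complete ("square") the triangle by applying the factors successively.
full = triangle.copy()
for j in range(n - 1):
    missing = np.isnan(full[:, j + 1])
    full[missing, j + 1] = full[missing, j] * f[j]

latest = np.array([triangle[i][~np.isnan(triangle[i])][-1] for i in range(n)])
reserves = full[:, -1] - latest              # reserve for each year of origin

print("development factors:", np.round(f, 3))
print("reserves by origin year:", np.round(reserves, 1))
print("total reserve:", round(reserves.sum(), 1))
```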
Bayesian Model/Bornhuetter-Ferguson Method
To obtain a better understanding of the Bornhuetter-Ferguson method, a firm grasp of the chain-ladder technique is required.
As stated by Mack (2000), the Bornhuetter-Ferguson technique essentially replaces the ultimate claim estimate of the chain-ladder approach with a different estimate based entirely on outside information and expert judgement. The use of external information to form the estimates naturally leads to a Bayesian model, because both the Bayesian model and the Bornhuetter-Ferguson technique assume prior knowledge about the distribution of the estimates. Several prior distributions can be used to model claim sizes, although the gamma distribution is generally accepted as the norm.
By applying this technique across all policies, each independent gamma distribution incorporates a stochastic element; the difference between policies lies in the estimated parameters of their respective distributions. In brief, the Bornhuetter-Ferguson technique essentially assumes perfect prior information about the underlying distribution of the empirical data, whereas the chain-ladder approach assumes the converse.
For practical purposes, we need an equilibrium point between the two techniques. England and Verrall (2002) suggested that we can compare the theoretical predictive distribution of the data with the methods described above. Using an over-dispersed negative binomial model to predict the distribution of the empirical data, the theoretical mean of the model results in a formula of the form

$$Z_{ij} \times (\text{chain-ladder estimate}) + (1 - Z_{ij}) \times (\text{prior Bornhuetter-Ferguson estimate}),$$

where $Z_{ij}$ is a credibility factor lying between 0 and 1.

As can be seen, a natural trade-off between the two methods of estimation is obtained. This is also the form of a credibility formula, with $Z_{ij}$ as the credibility factor. As stated by England and Verrall (2002), $Z_{ij}$ governs the trade-off between the prior mean and the data; hence, its value should be chosen with precision with regard to the initial estimate for the ultimate claims.
In short, we must choose a suitable prior estimate using past experience to categorise policyholders. An appropriate credibility premium can then be charged based on the number of years for which data are available. (For a more comprehensive understanding of the predictive distribution and the prediction error of the outstanding claims, please refer to England and Verrall (2002))
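As a rough numerical sketch of this credibility-weighted blend, with hypothetical figures and with the credibility factor simply taken to be the proportion of claims assumed developed (one common but by no means unique choice):

```python
# Hypothetical blend between a chain-ladder ultimate and a prior estimate
# (Bornhuetter-Ferguson style) for a single year of origin.
claims_to_date = 2100.0      # cumulative claims observed so far
cum_dev_factor = 1.25        # cumulative development factor to ultimate
prior_ultimate = 2700.0      # prior ultimate from external information

cl_ultimate = claims_to_date * cum_dev_factor   # chain-ladder ultimate
z = 1.0 / cum_dev_factor                        # credibility: proportion developed

blended_ultimate = z * cl_ultimate + (1.0 - z) * prior_ultimate
print("chain-ladder ultimate:", cl_ultimate)                            # 2625.0
print("credibility factor Z:", round(z, 3))                             # 0.8
print("blended ultimate:", round(blended_ultimate, 1))                  # 2640.0
print("blended reserve:", round(blended_ultimate - claims_to_date, 1))  # 540.0
```

With this particular choice of Z, the blended ultimate coincides with the classical Bornhuetter-Ferguson estimate: claims to date plus the prior ultimate multiplied by the proportion still to develop.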
Mack's Model
Mack (1993) proposed that estimates of standard errors can be obtained using an approach that is independent of the distribution underlying the claims. The benefit of this model is that it does not make unrealistic assumptions of the underlying distribution of claims and the development factors.
As summarised by England (2009), the specified mean and variance are as follows:

$$E\left[C_{i,j+1} \mid C_{i1}, \dots, C_{ij}\right] = f_j\, C_{ij}, \qquad \operatorname{Var}\left[C_{i,j+1} \mid C_{i1}, \dots, C_{ij}\right] = \sigma_j^2\, C_{ij},$$

where $C_{ij}$ is defined as the cumulative claim with year of origin $i$ and year of development $j$.
Using the above equations and the development factors, calculated in the same way as previously defined, we could obtain a squared run-off triangle that can be used to estimate future reserves.
To include the variability factor, we let

$$\hat{\sigma}_j^2 = \frac{1}{n-j-1} \sum_{i=1}^{n-j} C_{ij} \left( \frac{C_{i,j+1}}{C_{ij}} - \hat{f}_j \right)^2.$$

By doing so, we are incorporating both estimation variance and process variance into the future reserves. Therefore, the mean square error of the reserve for any given year of origin $i$, as stated by Mack (1993), is:

$$\widehat{\operatorname{mse}}\left(\hat{R}_i\right) = \hat{C}_{in}^2 \sum_{j=n+1-i}^{n-1} \frac{\hat{\sigma}_j^2}{\hat{f}_j^2} \left( \frac{1}{\hat{C}_{ij}} + \frac{1}{\sum_{k=1}^{n-j} C_{kj}} \right).$$
It should be noted that the residuals used in estimating the scale parameters in Mack's model are consistent with the assumptions of a weighted normal regression model; the estimation is therefore reasonable.
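A short Python sketch of the variance-estimation step might look as follows; the function operates on a cumulative triangle and its development factors (such as the hypothetical `triangle` and `f` from the chain-ladder example above), and the simple fallback used for the final development year is an assumption made here for brevity rather than Mack's own extrapolation:

```python
import numpy as np

def mack_sigma2(triangle, f):
    """Estimate Mack's sigma_j^2 for each development year j from a
    cumulative run-off triangle (NaN = unobserved) and its factors f."""
    n = triangle.shape[0]
    sigma2 = []
    for j in range(n - 1):
        rows = ~np.isnan(triangle[:, j + 1])
        c_j, c_next = triangle[rows, j], triangle[rows, j + 1]
        m = rows.sum()
        if m > 1:
            s2 = float((c_j * (c_next / c_j - f[j]) ** 2).sum() / (m - 1))
        else:
            # Only one observation left: simply carry the previous estimate
            # forward (Mack (1993) suggests a more refined extrapolation).
            s2 = sigma2[-1]
        sigma2.append(s2)
    return sigma2

# Example usage: sigma2 = mack_sigma2(triangle, f)  # using the earlier triangle
```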
General Insurance - Premium Pricing
Overview
As described earlier, the premium can be viewed as the market value of insurance that maximises the wealth of the insurer. At the very least, it should make the policy sustainable even in harsh times. Generally, a premium function must be established in such a way that the solvency of the insurer is assured; the main requirement is that the premium income be sufficient to meet incoming claims.
However, too high of a value in comparison with rival insurance companies will result in an undesirable outcome. Taylor (1986) developed the fundamental concept of how competition in premium rates has considerable impact on the insurer's strategy. (Please see Taylor (1986) for the full literature)
Simple Capital Asset Pricing Model (CAPM)
The pioneers of this simple yet fundamental approach were Cooper (1974) and Biger et al. (1978). Premiums obtained using this model reflect valuation in perfect capital markets, which is by no means realistic. Despite this, the idea remains of much theoretical use because it forms the foundation of many insurance pricing models.
Cummins (1990) states that the derivation of this model begins from:

$$Y = I + \mu, \qquad I = r_A A, \qquad \mu = r_U P,$$

where $Y$ is the net revenue, $I$ is the revenue from investment, $\mu$ is the underwriting profit, $r_A$ is the rate of return on assets, $r_U$ is the rate of return on underwriting, and $A$ and $P$ are assets and premiums respectively.
The equation is then divided through by equity to obtain the rate of return on equity:

$$r_E = \frac{Y}{E} = r_A \frac{A}{E} + r_U \frac{P}{E},$$

where $E$ is the equity and $r_E$ is the rate of return on equity.
Considering the relationship between assets, liabilities and equity, $A = L + E$, we can express the rate of return on equity in a more useful form:

$$r_E = r_A \left(1 + k s\right) + r_U s,$$

where $k = L/P$ is the ratio of liabilities to premiums and $s = P/E$ is the premium-to-surplus ratio.
However, realised rates of return are not very practical in the real world due to their limited economic usability. Thus, using the CAPM as the pricing model, an alternative expression can be obtained by replacing each rate of return with its risk-premium coefficient $\beta_B = \operatorname{Cov}(r_B, r_m)/\operatorname{Var}(r_m)$, where $r_B$ is the rate of return on an asset $B$ and $r_m$ is the rate of market return. Thus, the new equation is as follows:

$$\beta_E = \beta_A (1 + k s) + \beta_U s,$$

where $\beta_E$ is the beta of equity, $\beta_A$ is the beta of assets and $\beta_U$ is the beta of underwriting.
Hence, using the equilibrium rate of return on equity principle, equating it to the expected rate of return on equity $E(r_E)$ and solving for $E(r_U)$, the resulting expression yields:

$$E(r_U) = -k\, r_f + \beta_U \left[ E(r_m) - r_f \right],$$

where $E(r_U)$ is the expected rate of return on underwriting, $E(r_E)$ is the expected rate of return on equity and $r_f$ is the risk-free rate.

This final equation is often referred to as the insurance CAPM, which differs slightly from the CAPM for bonds. (For a proof of the insurance CAPM, please see Cummins (1990), pg 150-152)
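As a purely illustrative calculation with hypothetical figures: taking a risk-free rate of 5%, an expected market return of 10%, an underwriting beta of -0.2 and a liabilities-to-premium ratio of $k = 1$, the insurance CAPM above gives

$$E(r_U) = -1 \times 0.05 + (-0.2)(0.10 - 0.05) = -0.06,$$

i.e. an equilibrium underwriting margin of -6% of premiums, the shortfall being compensated by the investment income earned on the funds held.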
The insurance CAPM possesses some insightful features, such as the incorporation of different risk factors. However, it does not take into account any interest rate risk, and the use of the liabilities-to-premium ratio is only a crude, time-homogeneous estimate of the payout tail, which is unrealistic. In any case, it is too elementary to model real-world situations.
Myers-Cohn Model
Due to the simplicity of the insurance CAPM, further insurance pricing models have been put forward. One of them is the Myers-Cohn model, which uses the concept of net present value to determine the underwriting profit. In the United States of America (USA), the Myers-Cohn model is used extensively to set provisions for the property-liability insurance industry.
Brealey and Myers (1988) first proposed the idea of using adjusted present value, which can be summarised as follows:
i) Estimation of cash inflows and outflows
ii) Application of risk-adjusted discount rate for every single cash flow.
iii) Calculation of discounted cash flows.
iv) Accept the policy if NPV is positive, otherwise further measures need to be taken.
The procedures might seem trivial but complications arise from choosing appropriate risk discount factors for the respective cash flows. The extension of adjusted present value to include extra real-world elements such as corporate tax forms the Myers-Cohn model.
In order to derive the general formula of Myers-Cohn, we start by considering a two-period model with cash flows at time periods 0 and 1. In this model, we need to assign discounting factors to each inflow and outflow: premiums are discounted at a risk-free rate, losses at an appropriately adjusted rate, and underwriting profits at the risk-free and risk-adjusted rates in their respective proportions.
Performing the steps described above and simplifying, Cummins (1990) obtained an explicit expression for the premium $P$ in terms of the expected losses and expenses $E(L + e)$, the risk-free rate, the adjusted risk factor, the corporate income tax rate, and the surplus-to-premium ratio.

From this expression, we are able to deduce that a positive risk premium implies a lower premium, and vice versa. Although an explicit expression for the premium has been obtained, it is not so useful because it fails to consider the element of market risk.
Myers-Cohn Model Using CAPM
In this section, we combine the concept of the CAPM with the Myers-Cohn model in the more general situation whereby the investment balance for tax and the underwriting profit are included. The resulting expression, as pointed out by Mahler (1998), balances the present value of the premium after tax (discounted at the risk-free rate) against the present value of losses and expenses (discounted at the adjusted risk factor), the present value of the tax on the investment balance (discounted at the risk-free rate), and the underwriting profit. All other variables are defined as before. [Note that this may be slightly different from what we defined in section 3.3, because in this section we include real-world elements such as corporate tax and the after-tax investment balance.]
Rearranging the above equation yields two important results: an expression for the premium and a corresponding expression for the provision to be held, each written in terms of the risk-adjusted rate for $L + E$, the risk-free discount rates for the premium, the investment balance and the underwriting profit, the risk-adjusted rate for the underwriting profit after tax, the risk-free discount rate for revenue, and the revenue offset rate for tax.

Hence, it is popular among the property and liability insurance companies in the USA to set the latter quantity as the target provision.
(For more information regarding the derivation of this result and a real-life example with numerical solutions, please see Mahler (1998), pg 728-731, Exhibit 5)
Validation Methods
Looking at different classes of pricing approach, we should not neglect the fact that premiums should be fair. Fair in this context refers to premiums being forward looking, as stated by Harrington and Niehaus (2003). (The explanation of forward looking is on pg 149-151, Harrington and Niehaus (2003))
Thus, we should investigate whether the premium obtained is a reasonable figure. A basic method of checking, as proposed by England (2003), is to take the expected claims and add a risk adjustment, usually a number of standard deviations. Certainly, the premium charged should not be lower than the risk-adjusted expected loss, but neither should it be at the other extreme, making the product uneconomical and unfair to the insured.
Another method of validation, proposed by Wang (1999), has been used extensively. His method uses a proportional hazards model to calculate a risk-adjusted price through the control of a parameter ρ: essentially, the survival function of the loss distribution is raised to the power of 1/ρ. Wang's method can easily be applied in Excel spreadsheets, and the parameter ρ can be varied to obtain the desired risk-adjusted price. However, choosing a value of ρ can prove to be very subjective.
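A minimal sketch of this transform, assuming (purely for illustration) an exponential loss distribution and taking the risk-adjusted premium to be the integral of the transformed survival function:

```python
import numpy as np

def ph_premium(survival, rho, upper, steps=200_000):
    """Risk-adjusted premium under Wang's proportional-hazards transform:
    the integral of S(x)**(1/rho) over [0, upper] (rho >= 1 loads the premium)."""
    x = np.linspace(0.0, upper, steps)
    s = survival(x) ** (1.0 / rho)
    dx = x[1] - x[0]
    return (s.sum() - 0.5 * (s[0] + s[-1])) * dx   # trapezoidal rule

mean_loss = 100.0
survival = lambda x: np.exp(-x / mean_loss)        # exponential losses, mean 100

for rho in (1.0, 1.5, 2.0):
    print(rho, round(ph_premium(survival, rho, upper=5000.0), 2))
# rho = 1 recovers the expected loss (about 100); larger rho gives a larger loading.
```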
Simulations
Overview
As stated by Daykin, Pentikainen and Pesonen (1996), “Modern computer simulation techniques open up a wide field of practical applications for risk theory concepts, without the restrictive assumptions, and sophisticated mathematics, of many traditional aspects of risk theory”
In brief, producing a stochastic model involves identifying the possible outcomes and modelling the variability of the situation using a suitable number of parameters. Estimates of the parameters are then obtained and entered into software such as Excel, where thousands of simulations are run. The outcomes are then treated as observations of various random variables, to which the most appropriate probability distribution function is fitted.
Before simulating, we need to consider possible statistical distributions for the frequency and severity of claim amounts, as described above. Judgement plays a very important role in finalising a suitable value for each chosen parameter, and historical loss data can be used (where available) to suggest a suitable distribution for simulation purposes.
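A minimal Monte Carlo sketch in this spirit is given below; the distributional choices and parameter values are illustrative assumptions rather than anything taken from a real portfolio:

```python
import numpy as np

rng = np.random.default_rng(42)

n_sims = 10_000            # number of simulated years
lam = 50                   # assumed Poisson claim frequency per year
shape, scale = 2.0, 500.0  # assumed gamma severity parameters

totals = np.empty(n_sims)
for k in range(n_sims):
    n_claims = rng.poisson(lam)                           # simulate claim count
    totals[k] = rng.gamma(shape, scale, n_claims).sum()   # simulate and sum severities

print("mean aggregate claims:", round(totals.mean(), 1))
print("99.5th percentile:", round(np.percentile(totals, 99.5), 1))
```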
Markov Chain Monte Carlo - Gibbs Sampler
A famous simulation approach known as the Monte Carlo method has been attracting much attention in the actuarial community. In this approach, a class of algorithms repeatedly generates outcomes using the methodology described above. However, Christofides et al. (1996) raised concerns regarding a significant drawback of the pure Monte Carlo approach: the cumbersome calculations and multiple assumptions needed to calculate conditional distributions. For instance, due to the effect of time, we would need to recalculate, say, the conditional mean at different time points, because the initial conditional mean had been calculated stochastically.
Hence, a variation of the pure Monte Carlo approach has been put forward, commonly known as Markov Chain Monte Carlo (MCMC). Basically, MCMC uses simulation methods, such as iteration, to acquire a simulated posterior distribution of the random variables. Adopting MCMC can be difficult for those with an analytical mindset, mainly because of the purely numerical solutions obtained as opposed to the formulae and assumptions used in a Bayesian model. Nevertheless, MCMC provides solutions to problems that are intractable analytically.
Evidently, MCMC techniques are used to obtain a predictive distribution of the unobserved values of the underlying parameters through a process of computer simulation. This in turn demonstrates the usefulness, simplicity, and sophistication of MCMC, because the derivation and evaluation of complicated formulae for the predictive distribution and prediction error is made redundant. England and Verrall (2002) suggested that, using MCMC in a claims reserving context, a distribution of the future ultimate claims in the run-off triangle can be obtained directly. The respective sums of the simulated amounts are calculated to provide predictive distributions for the different years of origin, and thus we are able to obtain the total estimated reserves. Consequently, obtaining best estimates becomes trivial, and this reduces the problem to a deterministic one.
First of all, to formally perform an MCMC method, Scollnik (1996) suggests that we need to first determine an irreducible and aperiodic Markov chain with a distribution, identical to the target distribution. The following procedure is to simulate one or more realisations in order to develop dependent sample paths of the target distribution. These sample paths are then used for inferential reasons with the following asymptotic results, stated by Roberts and Smith (1994):
$$X_t \to X \sim \pi \ \text{in distribution as } t \to \infty, \qquad \text{and} \qquad \frac{1}{t} \sum_{i=1}^{t} h(X_i) \to E_{\pi}\left[h(X)\right] \ \text{almost surely as } t \to \infty,$$

where $X_1, X_2, \dots, X_t$ are the realisations of the Markov chain with stationary distribution $\pi$ (with probability density function $g(x)$).
By using the first equation, with $t$ between 10 and 15, we are able to produce an approximately independent random sample from the distribution with probability density function $g(x)$ by taking the $t$-th value of each simulated sequence.

Subsequently, the second equation, according to Scollnik (1996), informs us that "if $h$ is an arbitrary $\pi$-integrable function of $X$, then the mean of this ergodic function converges to its expected value under the target density as $t$ approaches infinity." (For more information regarding the use of the MCMC method, please refer to Scollnik (2001))
In recent years, the widely used MCMC method known as the Gibbs sampler has proved very practical. Gibbs sampling is well accepted because the full conditional distributions of the target distribution, claims in our case, can be sampled exactly; this also means that it does not require any ‘tuning'. With the basic backbone of the MCMC method understood, we can proceed to use the Gibbs sampler to produce a Markov chain when the target distribution $\pi$ is known. Actuarial modelling using the Gibbs sampler was first recognised by Gelfand and Smith (1990).
To formally set up the Gibbs sampler, we start by writing the target distribution $\pi$ as a joint distribution $\pi(x_1, x_2, \dots, x_d)$, assuming that this distribution is real and proper. Each component $x_i$ may correspond to either a single random variable or a group of random variables. Next, we let $\pi(x_i)$ denote the marginal distribution of the $i$-th group of variables, and let $\pi(x_i \mid x_1, \dots, x_{i-1}, x_{i+1}, \dots, x_d)$ denote its full conditional distribution, where the remaining groups are known. Following this, the Gibbs sampler takes advantage of the full conditional distributions associated with the target distribution to define an ergodic Markov chain that has the same stationary distribution as the target distribution. Here, the Gibbs sampler refers to the implementation of a particular iterative process. The algorithm is as follows:
i) Pick suitable initial values $x_1^{(0)}, x_2^{(0)}, \dots, x_d^{(0)}$.
ii) Fix the counter index $k$ to be equal to 0.
iii) Start the simulation by simulating the sequence of random draws:

$$x_1^{(k+1)} \sim \pi\left(x_1 \mid x_2^{(k)}, x_3^{(k)}, \dots, x_d^{(k)}\right),$$
$$x_2^{(k+1)} \sim \pi\left(x_2 \mid x_1^{(k+1)}, x_3^{(k)}, \dots, x_d^{(k)}\right),$$
$$\vdots$$
$$x_d^{(k+1)} \sim \pi\left(x_d \mid x_1^{(k+1)}, x_2^{(k+1)}, \dots, x_{d-1}^{(k+1)}\right),$$

where the iterative pattern is that each component is drawn conditional on the most recently updated values of all the other components.
iv) Set $k = k + 1$ and re-iterate using step iii)
An explanation for step iii) is that we are required to draw random samples from each of the full conditional distributions so that the values of the conditioning variables are updated in the proper sequence. Using this methodology, under mild regularity conditions the stationary distribution of the Markov chain so defined is identical to the target distribution. Hence, we have simulated a claim amount from every full conditional distribution, which in turn forms a proper set of data when generated repeatedly. (The above is just a summary of the Gibbs sampling method; please see Scollnik (1996) and Roberts and Smith (1994) for clearer explanations)
In the practical world, there are various types of MCMC-based algorithms, some simpler and cheaper to implement but less practical, and others, more cumbersome and costly from a computation perspective but more realistic. Regardless, their main purpose ultimately is to approximate the target distribution.
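A compact, self-contained illustration of the iteration in step iii) is given below; the bivariate normal target is a toy example chosen only because its full conditionals are known exactly, not a claims model:

```python
import numpy as np

rng = np.random.default_rng(0)
rho = 0.8                      # correlation of the bivariate normal target
n_iter, burn_in = 20_000, 1_000

x1, x2 = 0.0, 0.0              # step i): initial values
samples = []
for k in range(n_iter):        # steps iii)-iv): iterate the full conditionals
    x1 = rng.normal(rho * x2, np.sqrt(1 - rho ** 2))   # draw x1 | x2
    x2 = rng.normal(rho * x1, np.sqrt(1 - rho ** 2))   # draw x2 | x1
    if k >= burn_in:
        samples.append((x1, x2))

samples = np.array(samples)
print("sample correlation:", round(np.corrcoef(samples.T)[0, 1], 3))  # close to 0.8
```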
Bootstrapping
In simple terms, bootstrapping is primarily used to analyse the variability of a set of random observations. This technique centres on re-sampling past data to generate large blocks of pseudo observations. In a claims reserving context, the bootstrapping method can easily be applied to stochastic models such as Mack's model, which will be discussed later in more detail. (More bootstrapping techniques, and applications to other stochastic models, can be found in England and Verrall (2001), England and Verrall (2002) and England and Verrall (2006).)
Many practitioners use bootstrapping because of the ease with which it can be applied on a computer. Bootstrap estimates are easily obtainable using Excel, and obtaining a predictive distribution is no longer complicated. Although the approach inevitably has drawbacks, for example a small number of pseudo observations may not be compatible with the underlying model, it nevertheless continues to be of great practical value.
To start off, bootstrapping basically assumes the observations to be independent and identically distributed. England and Verrall (2006) summarised the bootstrapping procedure in three stages:
Stage 1 requires calculation of fitted values.
Stage 2 requires the formation of blocks of pseudo observations from the original data set. Residuals are obtained by taking the difference between the fitted values and the original data; bootstrapping is possible because the residuals are independent and identically distributed. Next, they are adjusted and normalised using methods such as Pearson's formula. The adjusted residuals are then re-sampled N times to form new groups of pseudo observations, and the statistical model is re-fitted to each new data set.
Stage 3 requires forecasting future claim amounts using the re-fitted observations. Any process error will need to be incorporated.
The resulting product will be used to estimate a predictive distribution for claims. The mean of the stored results should be compared to a standard chain-ladder reserve estimate to check for inconsistencies.
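For reference, the (unscaled) Pearson residuals mentioned in stage 2 take the form below, with $X_{ij}$ the observed incremental claims and $m_{ij}$ the corresponding fitted values; the small-sample bias adjustments used in practice are omitted here for simplicity:

$$r_{ij} = \frac{X_{ij} - m_{ij}}{\sqrt{m_{ij}}}, \qquad X_{ij}^{*} = m_{ij} + r_{ij}^{*} \sqrt{m_{ij}},$$

where $r_{ij}^{*}$ is a residual drawn at random with replacement and $X_{ij}^{*}$ is the resulting pseudo observation.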
Bootstrapping Mack Model
For application purposes, a clear understanding of bootstrapping is required. Bootstrapping can be performed on models such as the over-dispersed negative binomial model discussed earlier and Mack's model.
The procedure to bootstrap Mack's model is as follows:
i) Produce a standard cumulative run-off triangle and calculate future claims using cumulative development factors as described above.
ii) Generate a list of crude residuals and use Pearson's formula to standardise them.
iii) Re-sampling of the residuals is then performed with replacement.
iv) Produce a run-off triangle with the new pseudo observations.
v) Calculation of new development factors using newly obtained pseudo observations.
vi) Simulate future claims by sampling from process distribution, incorporating process variance.
vii) Steps iii) to vi) are repeated N times to obtain a simulated reserve for each period.
(For an illustration of bootstrapping Mack model, please see England (2009), slide 24)
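The loop below is a deliberately simplified Python sketch of steps i)-v) and vii): the residuals are not scaled by σ_j, the process-variance draw of step vi) is omitted, and the triangle contains the same hypothetical figures used in the earlier examples:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical cumulative run-off triangle (rows = origin years, NaN = unobserved).
tri = np.array([[1000., 1800., 2100., 2200.],
                [1100., 1900., 2300., np.nan],
                [1200., 2100., np.nan, np.nan],
                [1300., np.nan, np.nan, np.nan]])
n = tri.shape[0]

# Observed development ratios F_ij = C_{i,j+1} / C_ij and their weights C_ij.
ratios, weights = [], []
for j in range(n - 1):
    rows = ~np.isnan(tri[:, j + 1])
    ratios.append(tri[rows, j + 1] / tri[rows, j])
    weights.append(tri[rows, j])

# i) Volume-weighted development factors from the observed ratios.
f = [np.average(r, weights=w) for r, w in zip(ratios, weights)]

# ii) Crude residuals of the development ratios (scaling by sigma_j omitted).
residuals = np.concatenate([(r - fj) * np.sqrt(w)
                            for r, w, fj in zip(ratios, weights, f)])

latest = np.array([tri[i][~np.isnan(tri[i])][-1] for i in range(n)])

sims = []
for _ in range(1000):                                  # vii) repeat N times
    # iii)-iv) resample residuals and rebuild pseudo development ratios
    pseudo = [fj + rng.choice(residuals, size=len(w)) / np.sqrt(w)
              for w, fj in zip(weights, f)]
    # v) new development factors from the pseudo observations
    f_star = [np.average(r, weights=w) for r, w in zip(pseudo, weights)]
    # apply the pseudo cumulative factors to the latest diagonal
    cum = np.concatenate(([1.0], np.cumprod(np.array(f_star)[::-1])))
    sims.append(float((latest * cum - latest).sum()))

print("bootstrap mean total reserve:", round(np.mean(sims), 1))
print("bootstrap s.e. of total reserve:", round(np.std(sims), 1))
```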
Although we have constructed a method of estimating reserves that includes standard errors, there is still an element of ambiguity in the number of parameters used in the formulation of Mack's formula; the number of parameters chosen should always be parsimonious. England and Verrall (2006) adopted a bias correction to Mack's model to enable a direct comparison of results when bootstrapping Mack's model, as a check for inconsistency.
Model Validation
Stochastic models are not universal; different models are suited to different data sets. Regardless of whether the simulated values look realistic, it is still necessary to perform model validation to ensure that the techniques used are consistent with the real world, and also to eliminate the element of error.
One such method is to run the simulated model over multiple scenarios, technically known as scenario testing, to account for all possibilities over a number of sensible time periods. Huge multinational companies have a target of 30,000 or more scenarios.
Schneider (2006) suggests two other alternatives, whereby the aggregation of results at various levels within projection runs is used to determine appropriate overall policyholder distributions. It may also be advisable to aggregate results along different dimensions (for example, geographically or by cohorts of clients).
Sensitivity analysis is also crucial to actuarial models. This technique describes the sensitivity of each result to slight changes in the values of the inputs. (For information regarding sensitivity analysis on the Myers-Cohn model, please see Mahler (1998), pg 718-721, 770-772)
Drawbacks
Schneider (2006) raised a very good point concerning a flaw in Excel spreadsheets. He suggested that, as time progresses, spreadsheets used to produce simulations will likely become less acceptable to parties that require the data, due to the increased risk of human error: when numerous calculations have been performed on the same spreadsheet by the same people, subsequent spreadsheets linked to it are likely to inherit any errors as well.
Additionally, Christofides et al. (1996) pointed out that any model, no matter how well built, will eventually fail to capture some real-world events. Thus, models should be updated constantly, even leaving the question of cost aside. Nevertheless, real-world features may reasonably be omitted when they are of no primary importance.