Risk Measurement in Commodities Markets Using Conditional Extreme Value Theory





1Business, Economics and Statistics Modelling Laboratory (BESTMOD), Faculty of Economics and Management, Sfax, Tunisia


The aim of this paper is to quantify risk in the oil, natural gas and phosphate markets by Value at Risk and Expected Shortfall, using the McNeil and Frey (2000) two-step approach based on the combination of extreme value theory and the GARCH model. A comparison is made between this method and various conventional methods such as GARCH models, filtered historical simulation, unconditional EVT-POT and unconditional EVT Block Maxima. Particular attention is given to the quality of the VaR forecasts obtained from the conditional EVT method. The results we report show that this method is the best one for quantiles above 99%. In all other cases, it offers acceptable VaR forecasts, but not statistically better than those of the GARCH methods.


Cite this article:

  • GHORBEL, Ahmed, and Sameh SOUILMI. "Risk Measurement in Commodities Markets Using Conditional Extreme Value Theory." International Journal of Econometrics and Financial Management 2.5 (2014): 188-205.


1. Introduction

Commodity price fluctuations have an important influence on the trade balances and economies of exporting and importing countries. Several world events have led to major disruptions in raw material prices during the last four decades. Given the volatility of global raw materials markets, the quantification and mitigation of price risk presents a number of challenges due to the temporal dependence in volatility, nonlinear dynamics and heavy tails in the distribution of returns. In this unpredictable, unstable and risky environment, protection against market risk has become a necessity. It is therefore important to model these price fluctuations and implement an effective tool for managing raw material price risk. Value at Risk has become a popular measure of risk in the financial industry. It is a number that indicates how much a financial institution or investor can lose with a given probability over a given time horizon. VaR has become the standard measure that financial analysts use to quantify risk, and it helps portfolio managers to determine the most appropriate risk management policy for any given situation. The Basel Committee on Banking Supervision (1996) at the Bank for International Settlements requires financial institutions such as banks and investment firms to meet capital requirements based on VaR estimates. The great popularity of VaR comes from its aggregation of several components of risk within the firm and the market into a single figure. When VaR is calculated under the hypothesis of normality of returns, risk will be underestimated, especially for the higher quantiles. Much research shows the importance of modelling return series by an adequate distribution or time series model to obtain more accurate VaR forecasts.
As risk management is the management of rare and extreme events, there is a tendency to model extreme events and to calculate the VaR of higher quantiles using Extreme Value Theory. EVT has been applied in many areas (hurricanes, earthquakes, floods, financial crises, crashes, oil shocks) where extreme values may occur. EVT is now a very active field of research. It provides a solid framework in which to formally study the behavior of outliers. It focuses directly on the tails of the sample distribution and, therefore, could potentially perform better than other approaches in predicting unexpected extreme changes. The direct application of EVT to return series is inappropriate because they are not independently and identically distributed. To overcome this drawback, McNeil and Frey (2000) proposed a combined approach that reflects two stylized facts exhibited by most financial return series: stochastic volatility and fat-tailed conditional distributions of returns. To the best of our knowledge, the majority of earlier studies have applied this approach to calculate the VaR of stock indexes; it has rarely been applied to commodities and raw materials. Our aim in this work is to study the predictive performance of the conditional EVT method in estimating the risk of three commodities markets: oil, natural gas and phosphate. We compare it with conventional and traditional VaR methods.

The remainder of this paper is organized as follows. In the next section, we briefly review the main empirical studies that have treated the issue. The main conventional VaR methods used in our empirical study are reported in Section 3. In Section 4, we review the concepts of Extreme Value Theory. In Section 5, we offer a brief presentation of Kupiec's and Christoffersen's backtesting tests. Section 6 describes the data, presents an empirical analysis of the different methods and examines the predictive performance of the conditional EVT method in estimating oil, natural gas and phosphate market risk. Section 7 delivers final remarks and concludes.

2. Predictive Performance of Conditional EVT: Literature Review

Extreme value theory, whose paternity can be attributed to Fisher and Tippett (1928), had its main development through the generalized framework proposed by Gumbel (1958). This theory has long been used in several fields such as climatology, hydrology, meteorology, industry and biology. It was introduced in finance and insurance as a tool for financial risk management more recently, at the end of the twentieth century, following the seminal work of Embrechts, Klüppelberg and Mikosch (1997) and Reiss and Thomas (1997). Extreme value theory (EVT) has been used to analyze financial data exhibiting clearly non-normal behavior.

The estimation of value at risk has become an important task in the management and measurement of risk for banks and financial institutions, especially after the decisions of the Basel Committee, which requires all institutions to be able to cover their trading portfolio losses over a period of 10 days in 99% of cases.

Traditional models for estimating VaR seek to model the entire empirical distribution of returns, which differs from the distribution of extreme events or outliers. The management of rare events demands the estimation of high quantiles of the distribution, which are not directly derived from the raw data. Extreme value theory, a statistical technique for estimating high quantiles of a distribution based on a limited number of observations, represents an interesting solution to this problem. Several reasons have led practitioners in the field to adapt and employ this concept in risk assessment, including:

•  Return distributions have fat, asymmetric tails. To approximate the tails of the distribution asymptotically, it is necessary to estimate the highest quantiles based on a limited number of observations.

•  Rare and extreme events may depart from the normal or traditional dynamics of markets. In this case, the characteristics of the distribution of returns may change significantly, and a separation between the tails of the distribution and the remaining observations is useful for a better estimation of higher quantiles (Neftci, 2000).

Danielsson et al. (1998) suggested that a suitable model for measuring value at risk should accurately represent extreme events. Therefore, the latest research on VaR is devoted to modeling extremes, and many practitioners have suggested the use of statistical techniques developed for the analysis of extreme variables. If we take extreme observations into account to derive the tails of the distribution of the random variable, we can obtain risk estimates more efficient and more accurate than those obtained when we model the entire distribution.

Danielsson and de Vries (1997) compared the predictive performance of several methods for estimating the VaR of portfolios composed of seven simulated U.S. equities. The results showed that EVT provides a good estimate of risk, especially in the tail of the distribution, whereas the Variance-Covariance (VC) and historical simulation methods both understate the risk.

A similar result was found by Longin (2000), who suggested that calculating VaR using GARCH models reflects the volatility level of the VaR at a given moment, but does not take into account the risk of extreme events due to unexpected changes in market conditions.

McNeil and Frey (2000) and Bali (2003) have shown empirically that parametric models used under the assumption of normality of observations are not efficient for estimating VaR during periods of crisis or major disruption, and in most cases they end up underestimating risk. Danielsson and Morimoto (2000) confirmed, using Japanese data, the performance of EVT as a tool for measuring and managing risk and its superiority over traditional GARCH models.

Several other studies have also compared EVT and GARCH models, such as Yamai and Yoshiba (2005), Kuester et al. (2005), Acerbi (2002), Inui and Kijima (2005) and Martins and Yao (2006). They showed that EVT is the best method to measure and estimate risk for the highest quantiles.

In contrast, Lee and Saltoglu (2001) used several loss functions to compare different methods and showed that EVT is not very effective in estimating the risk of five Asian stock market indices. Conventional GARCH models with innovations that follow normal or Student-t distributions give superior performance, but none of these models produced similar performance for all markets. The selection of the model that estimates risk most efficiently depends on the characteristics of the market in question, the estimation period and the loss function used.

McNeil and Frey (2000) proposed a two-step method to estimate VaR and ES which consists of the combination of extreme value theory and GARCH models. The conditional quantile is estimated in this case under the assumption that the tail of the distribution of the standardized residuals obtained from a GARCH model can be approximated by a generalized Pareto distribution (GPD); extreme value theory is thus applied to the standardized residuals of the GARCH model and not to the raw data. Assaf (2006) estimated the tails of the conditional return distributions of four markets (Egypt, Jordan, Morocco and Turkey) and showed that these distributions have fat tails which must be modeled and estimated by extreme value theory. Kiesel et al. (2000) followed the approach of McNeil and Frey to estimate the VaR of bond yields in emerging markets. They concluded that, for confidence levels commonly used in a risk management framework, EVT produces a risk assessment similar to that obtained from the empirical distribution. The superiority of EVT for high quantiles is not always guaranteed, since this method may produce VaR estimates lower than the realized losses, leading to several consecutive violations.

Cifter et al. (2007) used conditional EVT to estimate the Value at Risk and Expected Shortfall of the distribution of interest rates in Turkey and found that this method can improve the quality of forecasts. Gençay et al. (2003), Gençay and Selçuk (2004), Altay and Kucukozmen (2006) and Eksi et al. (2006) used EVT to estimate the tail of the distribution of returns of the Turkish stock market (ISE) and also concluded that EVT is the most powerful of all the methods used in these studies for the highest quantiles.

Ozun, Cifter and Yilmazer (2007) estimated the VaR and ES of the ISE index using eight models. These models are compared with the normal GARCH, FIGARCH and GARCH-t based on several tests and evaluation criteria, such as the Kupiec test, the Christoffersen test and the Lopez test.

The application of EVT to filtered data significantly improved the quality of the VaR forecasts compared with the estimates obtained by GARCH-family models.

Tolikas and Brown (2006) used EVT to examine the asymptotic distribution of the lower tail of the distribution of daily returns on the Greek stock index over the period from 1986 to 2001. They found that the parameters of the distribution vary with changes in the behavior of the tail distribution over time. Ghorbel and Trabelsi (2008) applied the approach of McNeil and Frey to measure risk in the Tunisian and French equity markets. They concluded that the best method of measuring risk depends on the behavior of the market studied. In the case of a calm market in which extreme events are rare, GARCH models perform better in calculating VaR. By contrast, if the market is strongly affected by crises and records significant price fluctuations, the EVT-GARCH model is superior to the others.

Richard et al. (2010) used a parametric approach to the estimation and forecasting of Value-at-Risk (VaR) and Expected Shortfall (ES) for a series of heteroscedastic financial returns. The data are modeled by a GJR-GARCH model to take leverage and asymmetry into account. The model is illustrated by its application to four international stock market indices and two exchange rates, generating one-step-ahead forecasts of VaR and ES. Standard and nonstandard tests are applied to these estimates, and the authors concluded that the GJR-GARCH model performs better when compared with several competing methods: in particular, it is the only prudent risk model over the study period, which includes the recent global financial crisis.

Cifter (2011) used extreme value theory (EVT) to estimate value at risk, combining wavelets and EVT to estimate volatility. This new method is applied to two major emerging stock markets: the Istanbul Stock Exchange (ISE) and the Budapest Stock Exchange (BUX). Its relative performance is compared with the RiskMetrics, EWMA, GARCH-ARMA and conditional and unconditional EVT methods. The author showed that extreme value theory increases the predictive performance of financial forecasts, as judged by the number of violations and tail-loss tests.

Ghorbel and Trabelsi (2009) proposed a method for estimating the VaR of a portfolio based on a combination of time-series models, extreme value theory and the theory of copulas. Each series of returns is modeled by a univariate ARMA-GARCH. In a first step, the method is applied to two portfolios, each consisting of two indices. In a second step, the method is generalized to a trivariate setting for measuring the risk of a portfolio of three indices; in this case, the dependence is modeled by a nested trivariate copula. They showed that methods based on conditional extreme value copulas can better model the dependence structure and provide better risk estimates compared with conventional univariate and multivariate methods.

Yang et al. (2011) used CVaR and applied EVT to model the tails of the return series in order to estimate asset risk more accurately. They applied the theory of copulas to capture the inter-dependence structure between assets and built a Copula-GARCH-EVT model, which they combined with Monte Carlo simulation methods and mean-VaR analysis to optimize the portfolio. The empirical study is carried out on four Chinese stock market indices, and the results show that the copula method can better characterize the inter-dependence structure between the assets, and confirm the performance of the GARCH-EVT and CVaR methods.

Ourir and Snoussi (2012) studied the impact of neglecting dependence on the assessment of extreme liquidity risk in the stock market. They compared VaR estimates under independence (the variance-covariance approach, historical simulation VaR and VaR adjusted to extreme values) with VaR estimates when dependence is taken into account. The effectiveness of these methods was tested and compared using backtesting tests. The results showed the importance of considering dependence to improve VaR estimates.

Stavroyiannis et al. (2012) used VaR as a measure of potential economic losses in the financial markets. They built a GARCH model in which the innovation process follows the Pearson type-IV distribution, considering daily returns of the DJIA, NASDAQ Composite, FTSE100, CAC40, DAX and S&P500 indices. Backtesting of the VaR calculations is performed with the coverage ratio, the Kupiec test and Christoffersen's test. They found that this GARCH model produces better results for four of the indices.

Schaumburg (2012) proposed a way to estimate and predict conditional value at risk with a nonparametric model. A monotonized local linear kernel estimator is used to estimate moderate (1%) conditional quantiles of the index return distributions. For extreme (0.1%) quantiles, nonparametric quantile regression is combined with extreme value theory. Probably due to its flexibility, the out-of-sample forecasting performance of the new model is superior to that of competing models. It is concluded that the gains from the additional flexibility are substantial and that quantile regression with nonparametric EVT refinements should be considered a practical alternative for estimating and forecasting VaR. Alves and Santos (2012) used the threshold method based on fitting a stochastic model to threshold exceedances, developed under the acronym POT (peaks over threshold). They proposed an approach, within the POT framework, that uses the durations between excesses as covariates to eliminate the tendency of violations to cluster. The empirical study is conducted on three global stock market indices: the S&P 500, DAX and FTSE. They concluded that the POT models give better results than the widely used RiskMetrics model.

Marimoutou et al. (2009) estimated the value at risk (VaR) of long and short trading positions in the oil market, using unconditional and conditional EVT models to forecast the value at risk. These models are compared with other well-known modeling techniques, such as GARCH, historical simulation and filtered historical simulation. They found that conditional EVT and the filtered historical simulation method provide a major improvement over the conventional methods. In addition, the GARCH(1,1)-t model can also provide good results, comparable to those obtained by combining the two methods. The authors emphasized the importance of filtering the data for the success of the standard approaches.

3. Value at Risk and Expected Shortfall Methods

3.1. Value at Risk and Expected Shortfall as Risk Measurement Tools

VaR is defined as a quantile of the distribution of returns (or losses) of the asset, commodity or portfolio in question. Value-at-Risk (VaR) is the maximum amount of money that may be lost on a portfolio over a given period of time, with a given confidence level, due to exposure to market risk. It can also be viewed as the predicted worst-case loss at a specific confidence level over a certain period of time. Some practitioners prefer to consider the negative of this quantile, so that higher values of VaR correspond to higher levels of risk; this is the convention adopted in the present work.

Formally, let $r_t = \ln(P_t) - \ln(P_{t-1})$ be the return at time t, where $P_t$ is the price of the commodity at time t. We denote the $(1-p)$ quantile estimate at time t for a one-period-ahead return as $\mathrm{VaR}_t(p)$, so that:

$$\Pr\left(r_{t+1} < -\mathrm{VaR}_t(p) \mid \mathcal{F}_t\right) = p$$
Value-at-risk’s popularity originates from its aggregation of several components of risk at the firm level and from the market into a single number. More formally, VaR is calculated based on the following equation:

$$\mathrm{VaR}_{t+1}(p) = \hat{\mu}_{t+1} + \hat{\sigma}_{t+1}\, F^{-1}(p)$$

where $F^{-1}(p)$ is the corresponding quantile of the assumed distribution, $\hat{\mu}_{t+1}$ is the forecast of the conditional mean and $\hat{\sigma}_{t+1}$ is the forecast of the conditional standard deviation for t+1 given information at time t.

VaR as a risk measure is heavily criticized for not being sub-additive. This means that the risk of a portfolio can be larger than the sum of the stand-alone risks of its components when measured by VaR. Therefore, managing risk by VaR may fail to stimulate diversification. In addition, VaR does not take into account the severity of an incurred damage event{1}. As a response to these deficiencies, Artzner et al. (1997), Artzner et al. (1999) and Delbaen (1998) introduced the notion of coherent measures of risk. Expected Shortfall (ES) is a coherent measure of risk which, in several variants, has been proposed as a remedy for the deficiencies of Value-at-Risk (VaR), which is not coherent. ES is defined as the expected value of the losses that exceed VaR. It is calculated as follows:

$$\mathrm{ES}_p = E\left[X \mid X > \mathrm{VaR}_p\right]$$

where X is the loss variable recorded between times 1 and t.

If the original data are autocorrelated and filtered by an AR-GARCH model, ES is calculated by the same analogy as the Value at Risk:

$$\mathrm{ES}_{t+1}(p) = \hat{\mu}_{t+1} + \hat{\sigma}_{t+1}\, E\left[Z \mid Z > z_p\right]$$

where $z_t$ are the standardised residuals obtained after fitting an AR(1)-GARCH(1,1) model.

In the following, we compare various competitive VaR methods: Historical Simulation (HS), Filtered Historical Simulation (FHS), GARCH, EGARCH, TGARCH, and conditional and unconditional EVT in the Peaks-over-Threshold and Block Maxima versions.

3.2. Filtered Historical Simulation (FHS)

The historical simulation method assumes that historical distribution of returns will remain the same over time i.e. price change behaviour repeats itself over time. For more volatile and turbulent periods, this method could provide a very bad measure of risk as it is based on the assumption that the series under consideration is independent and identically distributed which is not the case in the majority of markets. Additionally, measuring VaR with this method is extremely sensitive to the choice of the sample length n.

In order to remedy some of the shortcomings of the HS method, various studies have used the filtered historical simulation method, which combines volatility models with historical simulation in order to lessen the problems of the latter. FHS consists of fitting a GARCH model to the return series and using historical simulation to infer the distribution of the residuals. Using the quantiles of the standardized residuals and the conditional standard deviation forecast from a volatility model, the VaR number is calculated as

$$\mathrm{VaR}_{t+1}(p) = \hat{\sigma}_{t+1}\, q_p(\hat{z})$$

where $\hat{\sigma}_{t+1}$ is the forecast of volatility at time t+1 using a univariate GARCH-n model and $q_p(\hat{z})$ is the empirical p-quantile of the standardized residuals. In our empirical investigation, we assume that the volatility estimates and the corresponding quantiles are generated via an AR(1)-GARCH(1,1) process.
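As an illustration, the FHS recipe can be sketched in a few lines of Python. The EWMA volatility filter, the decay parameter `lam` and the function name `fhs_var` are illustrative assumptions standing in for the fitted AR(1)-GARCH(1,1) of the text; the FHS logic itself (standardize, take the empirical quantile, rescale) is unchanged.

```python
import numpy as np

def fhs_var(returns, p=0.01, lam=0.94):
    """One-step-ahead Filtered Historical Simulation VaR (sketch).

    An EWMA (RiskMetrics-style) filter with decay `lam` stands in for a
    fitted AR(1)-GARCH(1,1): standardize returns by filtered volatility,
    take the empirical p-quantile of the standardized residuals, and
    rescale it by the one-step-ahead volatility forecast.
    """
    r = np.asarray(returns, dtype=float)
    sigma2 = np.empty(len(r))
    sigma2[0] = r.var()
    for t in range(1, len(r)):
        sigma2[t] = lam * sigma2[t - 1] + (1 - lam) * r[t - 1] ** 2
    z = r / np.sqrt(sigma2)                      # standardized residuals
    sigma_next = np.sqrt(lam * sigma2[-1] + (1 - lam) * r[-1] ** 2)
    q = np.quantile(z, p)                        # empirical p-quantile (negative)
    return -sigma_next * q                       # report VaR as a positive loss
```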

3.3. GARCH Model

In the present paper, we consider the applicability of the AR(1)-GARCH(1,1) model to estimate volatility for the different energy commodity markets in the univariate context and the VAR(1)-MGARCH(1,1) in the multivariate context.

The conditional mean and variance equations of the AR-GARCH model, under the hypothesis that the innovations are normally distributed, can be expressed as

$$r_t = \mu + \phi r_{t-1} + \varepsilon_t, \qquad \varepsilon_t = \sigma_t z_t, \qquad z_t \sim N(0,1)$$

$$\sigma_{t+1}^2 = \omega + \alpha \varepsilon_t^2 + \beta \sigma_t^2$$

where $r_t$ denotes the return at time t, $\mu$ is the conditional mean, $\sigma_{t+1}^2$ is the conditional variance at day (t+1) and $\omega, \alpha, \beta$ are non-negative parameters with the restriction that $\alpha + \beta$ is less than one, to ensure stationarity and the positivity of the conditional variance as well. The log-likelihood function can be expressed as

$$L(\theta) = -\frac{1}{2} \sum_{t=1}^{T} \left[\ln(2\pi) + \ln \sigma_t^2 + \frac{\varepsilon_t^2}{\sigma_t^2}\right]$$

where $\theta = (\mu, \phi, \omega, \alpha, \beta)$ is the parameter vector of the AR-GARCH-N model. In this case, VaR can be computed as

$$\mathrm{VaR}_{t+1}(p) = \hat{\mu}_{t+1} + \hat{\sigma}_{t+1}\, \Phi^{-1}(p)$$

where $\Phi^{-1}(p)$ denotes the percentile at p for the standard normal distribution.

According to Bollerslev (1986), using the Student-t distribution as the conditional distribution of the GARCH model is more satisfactory, since it has thicker tails and larger kurtosis than the normal distribution. Under the hypothesis that the innovations follow a Student-t distribution, VaR can be computed as

$$\mathrm{VaR}_{t+1}(p) = \hat{\mu}_{t+1} + \hat{\sigma}_{t+1}\, \sqrt{\frac{v-2}{v}}\; t_v^{-1}(p)$$

where $t_v^{-1}(p)$ denotes the percentile at p for the standardized Student-t distribution with v degrees of freedom.
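The two VaR formulas above (normal and Student-t innovations) can be sketched as follows; `garch_var` is a hypothetical helper, and the conditional mean and volatility forecasts are assumed to come from an already-fitted model.

```python
from scipy.stats import norm, t as student_t

def garch_var(mu_next, sigma_next, p=0.01, dist="normal", v=5):
    """One-step-ahead VaR from AR(1)-GARCH(1,1) forecasts (sketch).

    mu_next and sigma_next are the conditional mean and volatility
    forecasts, assumed already obtained from a fitted model. The
    Student-t quantile is scaled by sqrt((v-2)/v) so the innovations
    have unit variance, matching the GARCH standardization.
    Returns VaR as a positive loss number.
    """
    if dist == "normal":
        q = norm.ppf(p)
    else:
        q = student_t.ppf(p, v) * ((v - 2) / v) ** 0.5
    return -(mu_next + sigma_next * q)
```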

3.4. EGARCH Model

In the GARCH model, the signs of the residuals or shocks have no effect on conditional volatility: only squared residuals enter the conditional variance equation. However, a stylized fact of financial volatility is that bad news (negative shocks) tends to have a larger impact on volatility than good news (positive shocks). Bad news tends to drive down the stock price, thus increasing the leverage of the stock, and the stock will be more volatile (Black, 1976). Nelson (1991) proposed the following exponential GARCH (EGARCH) model to take the leverage effects into consideration:

$$\ln \sigma_t^2 = \omega + \beta \ln \sigma_{t-1}^2 + \alpha\left(|z_{t-1}| - E|z_{t-1}|\right) + \gamma\, z_{t-1}$$

Unlike the GARCH model, no restrictions need to be imposed in the model's estimation, since the logarithmic transformation ensures that the forecasts of the variance are non-negative. Note that when $z_{t-1}$ is positive, that is, there is good news, the total effect of $z_{t-1}$ is $(\alpha + \gamma)|z_{t-1}|$; in contrast, when $z_{t-1}$ is negative, that is, there is bad news, the total effect of $z_{t-1}$ is $(\alpha - \gamma)|z_{t-1}|$.

The one-step-ahead conditional variance forecast for the GARCH(m, q) model is given by

$$\hat{\sigma}_{t+1}^2 = \hat{\omega} + \sum_{i=1}^{q} \hat{\alpha}_i\, \varepsilon_{t+1-i}^2 + \sum_{j=1}^{m} \hat{\beta}_j\, \sigma_{t+1-j}^2$$

For the EGARCH(m, q) model, the one-step-ahead conditional variance forecast is given by:

$$\ln \hat{\sigma}_{t+1}^2 = \hat{\omega} + \sum_{j=1}^{m} \hat{\beta}_j \ln \sigma_{t+1-j}^2 + \sum_{i=1}^{q} \left[\hat{\alpha}_i \left(|z_{t+1-i}| - E|z|\right) + \hat{\gamma}_i\, z_{t+1-i}\right]$$
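These one-step-ahead recursions can be sketched for the (1,1) case as follows; the function names and parameter values are illustrative, not estimates from this study.

```python
import math

def garch_forecast(omega, alpha, beta, eps_t, sigma2_t):
    """One-step-ahead GARCH(1,1) variance forecast:
    sigma^2_{t+1} = omega + alpha * eps_t^2 + beta * sigma^2_t."""
    return omega + alpha * eps_t ** 2 + beta * sigma2_t

def egarch_forecast(omega, alpha, gamma, beta, z_t, sigma2_t):
    """One-step-ahead EGARCH(1,1) variance forecast, assuming standard
    normal innovations, for which E|z| = sqrt(2/pi)."""
    ln_s2 = (omega + beta * math.log(sigma2_t)
             + alpha * (abs(z_t) - math.sqrt(2.0 / math.pi))
             + gamma * z_t)
    return math.exp(ln_s2)
```

With a negative leverage coefficient (gamma < 0), a negative shock z_t raises the EGARCH forecast more than a positive shock of the same size, which is the asymmetry described in the text.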
4. The Extreme Value Theory

EVT is a powerful and yet fairly robust framework in which to study and offer a parametric form for the tail of a distribution. It can conveniently be thought of as a complement to central limit theory: while the latter deals with fluctuations of cumulative sums, the former deals with fluctuations of sample maxima. In the following, we present two approaches to studying extreme events. The first is a direct modelling of the distribution of minimum or maximum realizations. The other models exceedances over a particular threshold. In addition, we present the McNeil and Frey (2000) approach, called conditional EVT, which is used to estimate tail-related risk measures in the case of heteroskedastic financial time series.

4.1. Block Maxima Method

Using the Fisher-Tippett theorem, it can be shown that, for a large class of distributions, the normalized sample maxima (i.e. the highest values in a sequence of iid random variables) converge to the Generalized Extreme Value distribution as the sample size increases. If $X_1, \dots, X_n$ are iid random variables from an unknown distribution F, and $a_n > 0$ and $b_n$ are adequate normalization coefficients, then for the sample maxima $M_n = \max(X_1, \dots, X_n)$:

$$\Pr\left(\frac{M_n - b_n}{a_n} \le x\right) \longrightarrow H(x), \qquad n \to \infty$$

where H(x) denotes the GEV distribution, which is defined as follows:

$$H_{\xi}(x) = \begin{cases} \exp\left(-(1+\xi x)^{-1/\xi}\right) & \text{if } \xi \neq 0 \\ \exp\left(-e^{-x}\right) & \text{if } \xi = 0 \end{cases}$$

Depending on the parameter ξ, the GEV includes three extreme value distributions as special cases. The distribution F is classified as fat-tailed when ξ > 0, in which case H is a Frechet distribution. If ξ < 0, H is a Weibull distribution and has a short tail (finite right endpoint). Finally, if ξ = 0, the distribution is Gumbel and it has a thin tail.

In the Block Maxima Method (BMM), suppose that we have a data series of maxima for a fixed block size n, drawn from an underlying distribution F which is supposed to be in the domain of attraction of H for some ξ. If the data are a series of iid variables, the true distribution of the n-block maximum can be approximated, for large n, by a GEV distribution.

The BMM employs this idea to fit the GEV distribution to a data series holding the block maxima over equal periods of length n. The parameters of the GEV fit (ξ, μ, σ) are estimated by maximum likelihood estimation (MLE).
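A minimal sketch of the BMM fit, assuming scipy is available; the block size, the function name `block_maxima_fit` and the use of losses (negated returns) are illustrative choices.

```python
import numpy as np
from scipy.stats import genextreme

def block_maxima_fit(returns, block_size=20):
    """Fit a GEV distribution to block maxima of losses (BMM sketch).

    Losses are the negated returns; maxima are taken over
    non-overlapping blocks of `block_size` observations and the GEV
    parameters are then obtained by maximum likelihood via scipy.
    Note that scipy's shape parameter c equals -xi in the convention
    of the text, so c < 0 corresponds to the fat-tailed Frechet case.
    """
    losses = -np.asarray(returns, dtype=float)
    n_blocks = len(losses) // block_size
    maxima = losses[: n_blocks * block_size].reshape(n_blocks, block_size).max(axis=1)
    c, loc, scale = genextreme.fit(maxima)
    return c, loc, scale, maxima
```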

4.2. Peak over Threshold Method

The distribution function $F_u$ is called the conditional excess distribution function (cedf) and is defined as the conditional probability:

$$F_u(y) = \Pr\left(X - u \le y \mid X > u\right) = \frac{F(u+y) - F(u)}{1 - F(u)}, \qquad 0 \le y \le x_F - u$$

where X is a random variable, u is a given threshold, y = x - u is the excess over u and $x_F \le \infty$ is the right endpoint of F. For a sufficiently high threshold u, the cedf can be approximated by the Generalized Pareto Distribution (GPD):

$$G_{\xi,\beta}(y) = \begin{cases} 1 - \left(1 + \xi y / \beta\right)^{-1/\xi} & \text{if } \xi \neq 0 \\ 1 - e^{-y/\beta} & \text{if } \xi = 0 \end{cases}$$

for $0 \le y \le x_F - u$, where $\beta > 0$ is a scale parameter and ξ is the tail index.


The function F(u) can be estimated non-parametrically by $\hat{F}(u) = (n - N_u)/n$, where n is the total number of observations and $N_u$ represents the number of exceedances over the threshold u{2}. After replacing $F_u$ by its GPD approximation, we get the following estimate for the tail of F:

$$\hat{F}(x) = 1 - \frac{N_u}{n}\left(1 + \hat{\xi}\,\frac{x-u}{\hat{\beta}}\right)^{-1/\hat{\xi}}, \qquad x > u$$

By inverting this expression, we get an expression for the (unconditional) quantile associated with a given tail probability p:

$$\hat{x}_p = u + \frac{\hat{\beta}}{\hat{\xi}}\left[\left(\frac{n}{N_u}\, p\right)^{-\hat{\xi}} - 1\right]$$
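A minimal sketch of the POT quantile estimator; the 90% threshold rule and the function name `pot_var` are assumptions for illustration (in practice the threshold is chosen from a mean-excess plot).

```python
import numpy as np
from scipy.stats import genpareto

def pot_var(returns, p=0.01, threshold_q=0.90):
    """Unconditional POT VaR: fit a GPD to losses above a threshold.

    The threshold u is set at the `threshold_q` empirical quantile of
    the losses, a common rule of thumb. scipy's shape parameter c
    plays the role of xi. The final line inverts the GPD tail
    estimator to obtain the p-tail quantile.
    """
    losses = -np.asarray(returns, dtype=float)
    u = np.quantile(losses, threshold_q)
    excesses = losses[losses > u] - u
    xi, _, beta = genpareto.fit(excesses, floc=0.0)
    n, n_u = len(losses), len(excesses)
    return u + (beta / xi) * ((n / n_u * p) ** (-xi) - 1.0)
```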
Much previous work has studied the performance of EVT methods in VaR estimation and shown that unconditional models produce VaR forecasts that react slowly to changing market conditions. Unconditional extreme value estimates are generally higher and considerably less volatile than those of GARCH models. Rolling samples do not generate substantial change in the set of extreme observations, and as a result the unconditional VaR estimates are almost time-independent. Unconditional EVT models are more suitable for long-run forecasts of extreme losses than as a day-to-day tool to measure market risk. As a remedy, McNeil and Frey (2000) propose a two-step approach that consists of fitting an AR-GARCH model to the original return series before using the GPD to infer the distribution of the residuals.

4.3. Conditional EVT

To estimate the value at risk, we follow the two-step approach proposed by McNeil and Frey (2000), called conditional extreme value theory (conditional EVT):

Step 1: Model the series by a GARCH family model and estimate the model parameters by maximizing the log-likelihood and assuming that the innovations are normally distributed.

Step 2: We consider the standardized residuals calculated in step 1 as realizations of a white noise process and estimate the tails of the distribution of the innovations using extreme value theory, in order to then calculate quantiles of the innovations for probability levels above 0.95.

The first step filters the financial time series using a GARCH model, while the second applies EVT not to the raw return data but to the standardized residuals of this model, before calculating the value at risk of the standardized residuals and deducing the value at risk of the returns. We assume that the returns of an asset may be expressed as follows:

$$r_t = \phi_0 + \phi_1 r_{t-1} + \varepsilon_t, \qquad \varepsilon_t = \sigma_t z_t$$

with $\phi_0, \phi_1$ parameters to be estimated, $r_{t-1}$ the return at t-1 and $z_t$ innovations with mean 0, variance 1 and cumulative distribution function $F_Z$. We assume that the conditional variance is $\sigma_t^2$ and that the residuals $\varepsilon_t = \sigma_t z_t$ follow a GARCH(p, q) process:

$$\sigma_t^2 = \omega + \sum_{i=1}^{p} \alpha_i\, \varepsilon_{t-i}^2 + \sum_{j=1}^{q} \beta_j\, \sigma_{t-j}^2$$
Under the assumption of normality of the innovations, the log-likelihood function is given by:

$$L(\theta) = -\frac{1}{2} \sum_{t=1}^{T} \left[\ln(2\pi) + \ln \sigma_t^2 + \frac{\varepsilon_t^2}{\sigma_t^2}\right]$$
Standardized residuals can be computed once $L(\theta)$ has been maximized and the parameter estimates obtained:

$$\hat{z}_t = \frac{r_t - \hat{\mu}_t}{\hat{\sigma}_t}$$
A one-step-ahead prediction of the conditional variance at t+1 is given by

$$\hat{\sigma}_{t+1}^2 = \hat{\omega} + \sum_{i=1}^{p} \hat{\alpha}_i\, \hat{\varepsilon}_{t+1-i}^2 + \sum_{j=1}^{q} \hat{\beta}_j\, \hat{\sigma}_{t+1-j}^2$$

with $\hat{\varepsilon}_t = r_t - \hat{\mu}_t$ and $\hat{\mu}_{t+1} = \hat{\phi}_0 + \hat{\phi}_1 r_t$.

For a one-period horizon, an estimate of the value at risk for a given probability p is as follows:

$$\widehat{\mathrm{VaR}}_{t+1}(p) = \hat{\mu}_{t+1} + \hat{\sigma}_{t+1}\, \hat{z}_p$$

where $\hat{z}_p$ is the tail quantile of the standardized residuals estimated by the POT method:

$$\hat{z}_p = u + \frac{\hat{\beta}}{\hat{\xi}}\left[\left(\frac{n}{N_u}\, p\right)^{-\hat{\xi}} - 1\right]$$

Frey and McNeil (2002) summarize how all standard models may be recast as Bernoulli mixture models and in this way obtain a common mathematical representation that greatly facilitates their comparison.

5. Backtesting Tests

5.1. Kupiec’s Test

These tests are used to evaluate the adequacy of realized VaR forecasts in a risk management environment. It is well known that there are many sources of error in VaR figures: sampling errors, data problems, inappropriate specification, model error, etc. All these factors will often cause our estimates to be biased. In this paper, statistical adequacy is tested based on Kupiec's and Christoffersen's backtesting measures. Let $I_t$ be the sequence of VaR violations, with $I_t = 1$ if the realized loss exceeds the VaR forecast and $I_t = 0$ otherwise, and let $N = \sum_{t=1}^{T} I_t$ be the number of days over a period T on which the portfolio loss was greater than the VaR forecast. The violation ratio is defined as the number of violations divided by the total number of forecasts, VR = N/T. An accurate and correct model is obtained when the violation ratio equals the target probability p; at the pth quantile, the model is expected to be violated p percent of the time. A violation ratio greater than p implies that the model excessively under-predicts the realized losses; a violation ratio less than p implies excessive over-prediction.

The failure number $N$ follows a binomial distribution and, consequently, the appropriate likelihood ratio statistic under the null hypothesis that the exception frequency equals the expected one, $N/T = 1-p$, is

$$LR_{uc} = -2\ln\left[p^{\,T-N}(1-p)^{N}\right] + 2\ln\left[\left(1-\tfrac{N}{T}\right)^{T-N}\left(\tfrac{N}{T}\right)^{N}\right],$$

which is asymptotically distributed as $\chi^{2}(1)$.

This test can reject a model that has generated too many or too few VaR violations. As Kupiec notes, it rejects for both high and low failure rates, but its power is generally poor, especially at high confidence levels: it may fail to flag an inadequate model even when the difference between the observed and expected number of failures is significant.
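The Kupiec statistic above can be computed directly from the hit sequence. In this sketch `p` denotes the expected violation probability (e.g. 0.01 for 99% VaR), and the helper assumes at least one but not all days are violations (otherwise the log-likelihood degenerates):

```python
import numpy as np
from scipy.stats import chi2

def kupiec_test(violations, p):
    """Kupiec (1995) unconditional coverage test.

    violations: 0/1 hit sequence (1 = realized loss exceeded VaR).
    p: expected violation probability. Assumes 0 < N < T.
    Returns (LR statistic, p-value).
    """
    I = np.asarray(violations)
    T, N = len(I), int(I.sum())
    pi_hat = N / T
    # log-likelihood under H0 (rate p) and under the observed rate
    ll0 = (T - N) * np.log(1.0 - p) + N * np.log(p)
    ll1 = (T - N) * np.log(1.0 - pi_hat) + N * np.log(pi_hat)
    lr_uc = -2.0 * (ll0 - ll1)
    return lr_uc, chi2.sf(lr_uc, df=1)   # statistic and chi-square(1) p-value
```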

5.2. Christoffersen’s Test

A more complete test was proposed by Christoffersen (1998), who jointly examines the hypotheses that the total number of failures is statistically equal to the expected one and that the VaR violations are independent. The main advantage of this test is that it takes into account any conditionality in the forecasts: if volatilities are low in some periods and high in others, the forecasts should respond to this clustering. Under the null hypothesis that an exception occurring is independent of what happened the day before and that the expected proportion of violations equals $1-p$, the appropriate likelihood ratio is given by

$$LR_{cc} = -2\ln\left[p^{\,T-N}(1-p)^{N}\right] + 2\ln\left[(1-\pi_{01})^{n_{00}}\,\pi_{01}^{n_{01}}\,(1-\pi_{11})^{n_{10}}\,\pi_{11}^{n_{11}}\right],$$

which is asymptotically distributed as $\chi^{2}(2)$, where $n_{ij}$ is the number of observations with value $i$ followed by value $j$, for $i, j = 0, 1$, and $\pi_{ij} = n_{ij}/\sum_{k} n_{ik}$ are the corresponding transition probabilities; $j = 1$ denotes that a violation has occurred, while $j = 0$ indicates the opposite. If the sequence of violations is independent, then the probabilities of observing, or not, a VaR violation in the next period must be equal, which can be written more formally as $\pi_{01} = \pi_{11}$. This test can reject a VaR model that generates either too many or too few clustered violations, but it needs many observations to become accurate.
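A sketch of the conditional coverage statistic, built from the transition counts of the hit sequence (again with `p` the expected violation probability; `xlogy` guards the $0 \cdot \ln 0$ cases):

```python
import numpy as np
from scipy.stats import chi2
from scipy.special import xlogy

def christoffersen_cc(violations, p):
    """Christoffersen (1998) conditional coverage test on a 0/1 hit sequence."""
    I = np.asarray(violations)
    # transition counts n_ij: state i at t-1 followed by state j at t
    n00 = int(np.sum((I[:-1] == 0) & (I[1:] == 0)))
    n01 = int(np.sum((I[:-1] == 0) & (I[1:] == 1)))
    n10 = int(np.sum((I[:-1] == 1) & (I[1:] == 0)))
    n11 = int(np.sum((I[:-1] == 1) & (I[1:] == 1)))
    pi01 = n01 / max(n00 + n01, 1)
    pi11 = n11 / max(n10 + n11, 1)
    # likelihood under H0 (constant rate p) vs the first-order Markov alternative
    ll0 = xlogy(n00 + n10, 1.0 - p) + xlogy(n01 + n11, p)
    ll1 = (xlogy(n00, 1.0 - pi01) + xlogy(n01, pi01)
           + xlogy(n10, 1.0 - pi11) + xlogy(n11, pi11))
    lr_cc = -2.0 * (ll0 - ll1)
    return lr_cc, chi2.sf(lr_cc, df=2)   # statistic and chi-square(2) p-value
```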

6. Empirical Results

6.1. Data and Preliminary Statistics

The data consist of the closing daily prices of three products: oil (Brent), natural gas and phosphate. The collected data cover the period from January 2, 1998 through April 24, 2012, with a total of 3630 observations for the oil price series, and from April 2, 1998 through April 11, 2012, with 3569 observations for the natural gas price series. The phosphate price series is limited to the period from June 30, 1998 to April 30, 2012 due to the non-availability of data. This choice is justified by the importance of the economic and financial shocks to raw material prices occurring during this period. In addition, this period is characterized by a significant increase in volatility. The continuously compounded daily returns are calculated as the difference in the logarithms of daily futures prices multiplied by 100, $r_t = 100(\ln P_t - \ln P_{t-1})$, where $P_t$ is the futures contract price and $r_t$ is the return in percent on day t. The evolution of prices and returns over the entire period of the study is shown in Figure 1, Figure 2 and Figure 3. The first 1600 observations are used to study the behavior of prices and for model identification, while the remaining observations are used to assess the quality of the Value at Risk forecasts.
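The return transformation is a one-liner; `prices` stands for any daily price series:

```python
import numpy as np

def log_returns_pct(prices):
    """Continuously compounded daily returns in percent:
    r_t = 100 * (ln P_t - ln P_{t-1})."""
    p = np.asarray(prices, dtype=float)
    return 100.0 * np.diff(np.log(p))
```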

The descriptive statistics for the in-sample and full-sample periods for the daily returns of each market are presented in Table 1. These statistics include the mean, standard deviation, maximum, minimum, Jarque-Bera statistic and Ljung-Box statistics for raw and squared returns. During the sample period, the average return is positive in the majority of markets, indicating that prices tend to increase over time. The unconditional sample standard deviation indicates that the phosphate market was the most volatile during this period. The natural gas return has the largest kurtosis, which means that the possibility of extreme observations appearing in this market is higher than in the other markets. As indicated by the skewness coefficient, the oil return series is left-skewed while the natural gas and phosphate return series are right-skewed. The Jarque-Bera statistic indicates that daily returns are not normally distributed. On the basis of the Ljung-Box Q statistic for the raw return series, the hypothesis that all correlation coefficients up to lag 12 are jointly zero is rejected in the majority of cases. Therefore, we can conclude that the return series present some linear dependence. The statistically significant serial correlations in the squared returns imply that there is non-linear dependence in the return series. The correlogram for raw and squared returns and for raw and squared residuals is presented in Figure 4 and confirms these results. This indicates the existence of volatility clustering, and a GARCH-type model should be taken into account in the assessment and evaluation of VaR.

It is also important to note that the return series are inconsistent with the hypothesis of independent and identically distributed data required to apply EVT. To overcome this issue, it is recommended to filter the original data with a univariate GARCH model before applying EVT to the residual series.

Figure 1. Daily prices and returns for Brent index (period: from 02/01/1998 to 04/24/2012)
Figure 2. Daily prices and returns for Natural gas index (period: from 02/01/1998 to 04/24/2012)
Figure 3. Daily prices and returns for Phosphate index (period: from 30/06/2003 to 30/04/2012)

Table 1. Descriptive statistics for oil, Natural Gas and phosphates return series (in %)

6.2. GARCH Modeling

In the remainder of this work we consider the AR(1)-GARCH(1,1) specification to model the return series. We assume that this model is adequate during the estimation period and continues to be the most appropriate during the out-of-sample period. Parameter estimates were obtained by quasi-maximum likelihood, and the log-likelihood function of the data was constructed by assuming that innovations are conditionally Gaussian. Table 2 indicates that all parameters in the mean and variance equations are significant. The coefficient measuring volatility persistence is significant for the three markets, indicating that periods of high (low) volatility are followed by periods of high (low) volatility. Table 3 shows the effect of filtering the data in reducing the autocorrelation phenomena: in the majority of cases, the data are no longer autocorrelated after the AR(1)-GARCH(1,1) transformation. Figure 5 presents an estimate of the conditional volatility of natural gas returns obtained from the AR(1)-GARCH(1,1) model for the in-sample period (first 1600 days).

Table 3. Jarque-Bera and Ljung-Box tests applied to raw and squared returns and to raw and squared residuals obtained from AR(1)-GARCH(1,1) model

Figure 4. Correlogram for raw and squared returns and for raw and squared residuals obtained from the AR(1)-GARCH(1,1) model (phosphate)
Figure 5. Estimated conditional volatility index for Natural Gas obtained by AR (1)-GARCH (1, 1) model

We use equations (8) and (19) to calculate VaR for various confidence levels. The same procedure can be applied with EGARCH, TGARCH and a GARCH model whose residuals are assumed to follow a Student's t distribution. For the FHS and conditional EVT methods, we apply the traditional HS and EVT methods to the residuals obtained from the GARCH filter to calculate the VaR of the residuals, then deduce the VaR of the original data using equation (20).
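The second step of the conditional EVT (McNeil-Frey) procedure, the GPD tail quantile of the standardized residuals, can be sketched as follows. The 90th-percentile threshold and the use of scipy's `genpareto` fit are illustrative assumptions, not the authors' exact choices:

```python
import numpy as np
from scipy.stats import genpareto

def evt_residual_quantile(z, q=0.99, u_pct=90):
    """GPD-based q-quantile of standardized losses z (negated residuals).

    Fits a GPD to the exceedances over the u_pct-th percentile threshold,
    then applies the tail quantile estimator
        z_q = u + (beta/xi) * [((1-q) * n / N_u)^(-xi) - 1].
    """
    z = np.asarray(z, dtype=float)
    u = np.percentile(z, u_pct)
    exc = z[z > u] - u                     # exceedances over the threshold
    xi, _, beta = genpareto.fit(exc, floc=0)
    n, n_u = len(z), len(exc)
    return u + (beta / xi) * (((1.0 - q) * n / n_u) ** (-xi) - 1.0)

# Conditional EVT VaR then scales the residual quantile back with the
# GARCH forecasts (mu_next and sigma_next assumed available):
#   var_next = mu_next + sigma_next * evt_residual_quantile(z, q=0.99)
```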

6.3. The Block Maxima Method (BM) and the Peaks-Over-Threshold Method (POT)

Before estimating the parameters of the GEV distribution, we check for the existence of extreme values. Figure 6 shows clearly that oil and natural gas returns have fat tails. We then analyze data blocks representing the annual maxima of daily negative returns. Figure 7 gives an idea of the evolution of the block maxima over time, together with their histogram, QQ-plot and record development plot for oil returns.

The highest annual block maximum of negative returns is 12.85%, recorded in 2001. The QQ-plot uses the Gumbel distribution as reference and confirms that the extremes can be modeled by a GEV distribution. The record development plot illustrates the development of records (new maxima) of the daily negative returns; it is compared with the expected number of records in the case of iid data. The number of records seems consistent with iid behavior.

Figure 6. QQ-plot for a) oil returns b) natural gas returns
Figure 7. Evolution of block maxima over the years, histogram, Gumbel QQ-plot and record development plot for daily negative returns of the oil index (Brent)

For the POT method, more data can be used to model the behavior of extreme values above a certain threshold level u. The choice of the threshold presents a very important challenge. If the threshold is low, the number of exceedances is large, so the tail-index estimate has a lower variance, but observations from the center of the distribution enter the sample and the bias increases. Conversely, choosing a high threshold reduces the bias but leaves fewer exceedances, making the estimator more volatile.
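One standard diagnostic for this choice, mentioned in the footnotes, is the sample mean excess function: an approximately linear region of the plot suggests a threshold above which the GPD approximation is reasonable. A minimal sketch:

```python
import numpy as np

def mean_excess(losses, thresholds):
    """Sample mean excess function e(u) = mean(X - u | X > u),
    evaluated at each candidate threshold u."""
    x = np.asarray(losses, dtype=float)
    return np.array([(x[x > u] - u).mean() for u in thresholds])
```

For exponential data the mean excess function is flat (constant $1/\lambda$); a clearly increasing MEF points to a heavy tail with $\xi > 0$.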

The tail behavior of the losses can be inspected with a simple QQ-plot using the exponential distribution as the reference distribution. If the excesses over the threshold come from a thin-tailed distribution, then the GPD is exponential with ξ = 0 and the QQ-plot should be linear. If the QQ-plot is non-linear, this indicates either a bounded tail (ξ < 0) or heavy-tailed behavior (ξ > 0).

6.4. Estimation procedure and performance evaluation of VaR methods

The choice of estimation window is an important source of model risk. The Basel regulations insist on an estimation window of at least one year (around 250 trading days). We consider an estimation window of 1600 trading days for parameter estimation, to capture the dynamic, time-varying characteristics of the data. Then, for all competing models, we use a rolling sample of 1600 observations to estimate VaR. Results of VaR estimates are presented in Table 4 for oil returns, Table 5 for natural gas returns and Table 6 for phosphate returns. Each table reports VaR estimates for the in-sample period and for April 19, 2004 as an example day. The same tables also report the mean of the VaR estimates obtained during the out-of-sample period for various confidence levels. We backtest our risk measurement methods using the violation ratio, the Kupiec (1995) test for unconditional coverage and the Christoffersen (1998) test for conditional coverage. To better analyse VaR, we consider confidence levels of 95%, 96%, 97%, 98%, 99%, 99.5%, 99.7% and 99.9%.
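To illustrate the rolling-window scheme, a minimal historical-simulation variant (the simplest of the competing methods) can be sketched in Python; the window length and coverage level mirror those used here:

```python
import numpy as np

def rolling_hs_var(returns, window=1600, p=0.99):
    """Rolling historical-simulation VaR: the empirical p-quantile of the
    previous `window` losses, producing one forecast per out-of-sample day."""
    r = np.asarray(returns, dtype=float)
    losses = -r
    return np.array([np.quantile(losses[t - window:t], p)
                     for t in range(window, len(r))])
```

The parametric competitors follow the same loop, with the empirical quantile replaced by a model-based forecast re-estimated on each 1600-day window.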


A violation ratio that is excessively greater than the expected ratio implies that the model signals less capital allocation than needed and that risk is not properly hedged; in this case, the model increases risk exposure by underestimating it. An excessively lower violation ratio implies that the model signals a greater capital allocation than necessary.

The relative out-of-sample performance of each method in terms of the violation ratio for the left tail (losses) at the window size of 1600 observations is calculated and presented in Table 7, Table 8 and Table 9. Numbers in parentheses give the rank of each method among all competing methods. Shaded values indicate that the null hypothesis, according to which the violation ratio equals the selected level, is rejected at the 95% confidence level.

For the oil index, FHS is again the worst model for quantiles lower than the 0.98th. GARCH-n and unconditional EVT-POT perform well at the lower quantiles, while FHS, GARCH-t and conditional EVT provide the best results for quantiles higher than the 0.98th. Although EGARCH and TGARCH occupy the first two ranks at the 0.95th quantile, they offer no acceptable VaR forecasts at the other quantiles. No model statistically neither overestimates nor underestimates risk at all quantiles. At the 0.995th, 0.997th and 0.999th quantiles, conditional EVT performs best with violation ratios of 0.457%, 0.304% and 0.101% respectively, followed by the FHS method. At the 0.96th quantile, unconditional EVT-POT and GARCH-n provide the best violation ratios of 3.910% and 3.961%, which deviate from the expected 4% by 0.09% and 0.039% respectively.

Conditional and unconditional POT-EVT methods rank third with a ratio of 0.161% (0.139% over-rejection). The worst ratios are given by the var-cov, EGARCH and GARCH-n models. The GARCH-t model provides the best performance at the 0.95th, 0.997th and 0.999th quantiles; it ranks second at the 0.995th and fourth at the remaining quantiles. The unconditional EVT-POT method underestimates realized returns at all quantiles, while unconditional EVT-BM underestimates risk at all quantiles except the 0.997th and 0.999th. On the other hand, FHS overestimates risk at all quantiles. We can conclude that the conditional POT-EVT method reaches the first ranks only at the highest quantiles. In many cases, unconditional EVT methods provide more accurate forecasts than the conditional EVT method.

For the natural gas index, only the HS method provides VaR forecasts that neither underestimate nor overestimate risk at all quantiles, but it is not ranked first. The conditional POT-EVT method provides the best violation ratio for the 0.995th, 0.997th and 0.999th quantiles, as in the case of the oil market; it underestimates risk in two cases, at the 0.97th and 0.99th quantiles. For the other quantiles, three methods offer the best predictive performances and occupy the first ranks: GARCH-n, TGARCH and EGARCH. At the 0.97th and 0.98th quantiles EGARCH is the best, while at the 0.95th and 0.99th TGARCH performs better. The predictive performance of these methods deteriorates at higher quantiles.

For the phosphates index, both TGARCH and GARCH-t provide the best forecasts, as they statistically neither overestimate nor underestimate risk at the 95% level for all quantiles. Unconditional and conditional EVT methods generally provide unacceptable results, as in the majority of cases we reject the null hypothesis that the violation ratio equals the selected level.

Table 10 presents the likelihood ratio test statistics for conditional coverage (LRcc) for the twelve methods implemented, at eight different significance levels. Our goal is to check whether an exception occurring on one day is independent of an exception occurring the day before. The GARCH-n and GARCH-t methods offer acceptable results in all cases, followed by the conditional EVT and TGARCH methods, which give acceptable forecasts in the majority of cases. Forecasts obtained from the two unconditional EVT methods are not acceptable according to the Christoffersen test.

Figure 8 offers a visual presentation of oil negative returns and the VaR estimated with some of the better-performing models, such as FHS and conditional EVT. We observe that these models produce VaR forecasts that react to changing market conditions more quickly than the unconditional EVT method. In the conditional EVT and FHS methods, variances are forecast by an exponentially weighted model with declining weights on past observations and therefore depend crucially on the last few observations added to the sample. Conditional VaR forecasts not only increase with increasing volatility but also decrease with decreasing volatility, which indicates that conditional VaR estimates correspond more closely to the actual returns than the unconditional VaR estimates. Compared with GARCH and unconditional EVT methods, the conditional EVT method has the advantage of taking heteroscedasticity and the occurrence of extreme events into consideration simultaneously.

Figure 9 and Figure 10 offer similar visual presentations of natural gas and phosphate negative returns respectively, together with the VaR estimates obtained using some of the better-performing models. We observe that unconditional models produce VaR forecasts that react slowly to changing market conditions, whereas the reaction of conditional models to changing market volatility is much quicker. Unconditional extreme value estimates are generally higher and considerably less volatile than those of the GARCH models and the conditional EVT methods. The rolling samples do not generate substantial changes in the set of extreme observations, and as a result the unconditional VaR estimates are almost time-independent. Unconditional EVT models are therefore more suitable for long-run forecasts of extreme losses than as a day-to-day tool for measuring market risk.
Conditional methods imply changing the amount of capital held against risk each new day, by adding or subtracting the variation between the current and past VaR predictions. In contrast, with unconditional methods the amount is relatively stable, and many days may pass before the amount of capital is revised upwards or downwards.

Table 11 reports the mean Expected Shortfall of each market estimated using the competing methods at various confidence levels. When using the conditional EVT method to estimate oil, natural gas and phosphate market risk, for example, the means of Expected Shortfall at the 95% level are 4.831%, 8.772% and 7.637% respectively.
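In its simplest empirical form, Expected Shortfall at level p is the average loss beyond the p-quantile VaR; a minimal sketch (the model-based variants in Table 11 replace the empirical quantile and tail mean with their parametric counterparts):

```python
import numpy as np

def empirical_es(losses, p=0.95):
    """Empirical Expected Shortfall: mean loss at or beyond the p-quantile VaR."""
    x = np.asarray(losses, dtype=float)
    var_p = np.quantile(x, p)
    return x[x >= var_p].mean()
```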

Table 7. Ratios of violation and Kupiec's test for each method and each confidence level (oil)

Table 8. Ratios of violation and Kupiec's test for each method and each confidence level (Natural Gas)

Table 9. Ratios of violation and Kupiec's test for each method and each confidence level (Phosphate)

Table 10. Likelihood ratio test of conditional coverage LRcc

Figure 8. Daily oil returns and value at risk (VaR) estimates obtained with various competitive methods (confidence level: 99%)

Table 11. Expected Shortfall of each market (in mean) estimated using competitive methods

Figure 9. Daily natural gas returns and value at risk (VaR) estimates obtained with various competitive methods (confidence level: 98%)
Figure 10. Daily phosphate returns and value at risk (VaR) estimates obtained with various competitive methods (confidence level: 95%)

7. Conclusion

The purpose of this paper has been to conduct a comparative study of the predictive ability of commodity VaR estimates obtained using various estimation techniques. The main emphasis has been on extreme value methods and on evaluating how well conditional extreme value methods perform in modelling volatility clustering and extreme events simultaneously and in estimating and forecasting VaR measures. Three commodity markets have been investigated: oil, natural gas and phosphate. Empirical results show that conditional EVT performs better especially at higher quantiles, but its performance deteriorates at lower quantiles. GARCH models can provide VaR estimates that are better than those obtained from unconditional and conditional EVT methods, especially at lower quantiles.

There are several possible directions for future research. Unconditional and conditional copula methods may be applied to these data, or to high-frequency data, to model the dependence between markets and to measure the VaR of a portfolio composed of more than two commodities. Individual series can be modelled by more advanced models of the GARCH family, such as regime-switching models, to better distinguish between quiet periods and periods of crisis. Such methods can improve the quality of VaR forecasts and avoid enormous losses during periods of financial volatility.

We can follow the same methodology to estimate risk in other markets such as: metals (gold, copper, silver…) or raw materials (coffee, sugar, wheat…) markets or to estimate and measure other types of risk that an institution is exposed to, such as credit risk or operational risk, as it can be applied to problems of risk aggregation.


References

[1] Artzner, P., Delbaen, F., Eber, J-M. and Heath, D. (2000) Risk Management and Capital Allocation with Coherent Measures of Risk. Available from www.math.ethz.ch/finance.
[2] Artzner, P., Delbaen, F., Eber, J-M. and Heath, D. (1999) 'Coherent measures of risk', Mathematical Finance, Vol. 9, pp. 203-228.
[3] Assaf, A. (2009) 'Extreme observations and risk assessment in the equity markets of MENA region: Tail measures and Value-at-Risk', International Review of Financial Analysis, Vol. 18, pp. 109-116.
[4] Bekiros, S.D. and Georgoutsos, D.A. (2005) 'Estimation of value at risk by extreme value and conventional methods: a comparative evaluation of their predictive performance', Journal of International Financial Markets, Institutions and Money.
[5] Bollerslev, T. (1986) 'Generalized autoregressive conditional heteroskedasticity', Journal of Econometrics, Vol. 31, pp. 307-327.
[6] Brodin, E. and Rootzén, H. (2009) 'Univariate and bivariate GPD methods for predicting extreme wind storm losses', Insurance: Mathematics and Economics, Vol. 44, pp. 345-356.
[7] Brooks, C. and Persand, G. (2003) 'Volatility forecasting for risk management', Journal of Forecasting, Vol. 22, pp. 1-22.
[8] Christoffersen, P. and Diebold, F.X. (2000) 'How relevant is volatility forecasting for financial risk management?', Review of Economics and Statistics, Vol. 82, pp. 12-22.
[9] Cifter, A. (2011) 'Value-at-Risk estimation with wavelet-based extreme value theory: Evidence from emerging markets', Physica A: Statistical Mechanics and its Applications, Vol. 390, pp. 2356-2367.
[10] Embrechts, P., Resnick, S. and Samorodnitsky, G. (1999) 'Extreme value theory as a risk management tool', North American Actuarial Journal, Vol. 26, pp. 30-41.
[11] Embrechts, P., Klüppelberg, C. and Mikosch, T. (1997) Modelling Extremal Events for Insurance and Finance, Springer, Berlin.
[12] Embrechts, P., Resnick, S. and Samorodnitsky, G. (1998) Extreme Value Theory as a Risk Management Tool, Manuscript, Department of Mathematics, ETH, Swiss Federal Technical University, Zurich, Switzerland.
[13] Engle, R. (2001) 'GARCH 101: the use of ARCH/GARCH models in applied econometrics', Journal of Economic Perspectives, Vol. 15, No. 4, pp. 157-168.
[14] Engle, R.F. (1982) 'Autoregressive conditional heteroscedasticity with estimates of the variance of United Kingdom inflation', Econometrica, Vol. 50, pp. 987-1007.
[15] Ergün, T. and Jongbyung, J. (2010) 'Time-varying higher-order conditional moments and forecasting intraday VaR and Expected Shortfall', The Quarterly Review of Economics and Finance, Vol. 50, pp. 264-272.
[16] Fan, Y., Zhang, Y-J., Tsai, H-T. and Wei, Y-M. (2008) 'Estimating "Value at Risk" of crude oil price and its spillover effect using the GED-GARCH approach', Energy Economics, Vol. 30, pp. 3156-3171.
[17] Fernandez, V. (2005) 'Risk management under extreme events', International Review of Financial Analysis, Vol. 14, pp. 113-148.
[18] Fisher, R. and Tippett, L. (1928) 'Limiting forms of the frequency distribution of the largest or smallest member of a sample', Proceedings of the Cambridge Philosophical Society, Vol. 24, pp. 180-190.
[19] Gençay, R., Selçuk, F. and Ulugulyagci, A. (2003) 'High volatility, thick tails and extreme value theory', Insurance: Mathematics and Economics, Vol. 33, pp. 337-356.
[20] Ghorbel, A. and Trabelsi, A. (2009) 'Measure of financial risk using conditional extreme value copulas with EVT margins', Journal of Risk, Vol. 11, No. 4, pp. 51-85.
[21] Glosten, L.R., Jagannathan, R. and Runkle, D.E. (1993) 'On the relation between the expected value and the volatility of the nominal excess return on stocks', Journal of Finance, Vol. 48, pp. 1779-1801.
[22] Halbleib, R. and Pohlmeier, W. (2012) 'Improving the value-at-risk', Journal of Economic Dynamics and Control, Vol. 36, pp. 1212-1228.
[23] Hang Chan, N., Deng, S., Peng, L. and Xia, Z. (2007) 'Interval estimation of value-at-risk based on GARCH models with heavy-tailed innovations', Journal of Econometrics, Vol. 137, pp. 556-576.
[24] Huang, W., Liu, Q., Ghon Rhee, S. and Feng, W. (2012) 'Extreme downside risk and expected stock returns', Journal of Banking & Finance, Vol. 36, pp. 1492-1502.
[25] Jenkinson, A.F. (1955) 'The frequency distribution of the annual maximum (or minimum) values of meteorological elements', Quarterly Journal of the Royal Meteorological Society, Vol. 81, pp. 145-158.
[26] Krehbiel, T. and Adkins, L.C. (2005) 'Price risk in the NYMEX energy complex: An extreme value approach', Journal of Futures Markets, Vol. 25, pp. 309-337.
[27] Kupiec, P. (1995) 'Techniques for verifying the accuracy of risk measurement models', Journal of Derivatives, Vol. 3, pp. 73-84.
[28] Ledoit, O., Santa-Clara, P. and Wolf, M. (2003) 'Flexible multivariate GARCH modeling with an application to international stock markets', The Review of Economics and Statistics, Vol. 85, pp. 735-747.
[29] Longin, F.M. (2000) 'From value at risk to stress testing: the extreme value approach', Journal of Banking and Finance, Vol. 24, pp. 1097-1130.
[30] Marimoutou, V., Raggad, B. and Trabelsi, A. (2009) 'Extreme Value Theory and Value at Risk: Application to the oil market', International Review of Financial Analysis, Vol. 13, pp. 133-152.
[31] McNeil, A. (1998) Calculating Quantile Risk Measures for Financial Return Series using Extreme Value Theory, Working Paper, ETH Zürich, Switzerland.
[32] McNeil, A. (1999) 'Extreme value theory for risk managers', in Internal Modelling and CAD II, Risk Waters Group, London, pp. 93-113.
[33] McNeil, A. and Frey, R. (2000) 'Estimation of tail-related risk measures for heteroscedastic financial time series: an extreme value approach', Journal of Empirical Finance, Vol. 7, pp. 271-300.
[34] Frey, R. and McNeil, A. (2002) 'VaR and expected shortfall in portfolios of dependent credit risks: Conceptual and practical insights', Journal of Banking & Finance, Vol. 26, No. 7, pp. 1317-1334.
[35] Müller, U.A., Dacorogna, M.M. and Pictet, O.V. (1998) 'Heavy tails in high-frequency financial data', in Feldman, R.E. and Taqqu, M.S. (Eds.): A Practical Guide to Heavy Tails: Statistical Techniques and Applications, Birkhäuser, Boston, pp. 55-77.
[36] Nelson, D.B. (1991) 'Conditional heteroscedasticity in asset returns: a new approach', Econometrica, Vol. 59, pp. 347-370.
[37] Nomikos, N. and Pouliasis, P. (2011) 'Forecasting petroleum futures markets volatility: The role of regimes and market conditions', Energy Economics, Vol. 33, pp. 321-337.
[38] Ourir, A. and Snoussi, W. (2012) 'Markets liquidity risk under extremal dependence: Analysis with VaRs methods', Economic Modelling, Vol. 29, pp. 1830-1836.
[39] Reiss, R. and Thomas, M. (2001) Statistical Analysis of Extreme Values with Applications to Insurance, Finance, Hydrology and Other Fields, Birkhäuser, Basel.
[40] Bekiros, S. and Georgoutsos, D. (2008) 'The extreme-value dependence of Asia-Pacific equity markets', Journal of Multinational Financial Management, Vol. 18, pp. 197-208.
[41] Von Mises, R. (1936) 'La distribution de la plus grande de n valeurs', Revue Mathématique de l'Union Interbalkanique, Vol. 1, pp. 141-160. Reproduced in Selected Papers of R. von Mises (1964), American Mathematical Society, Vol. 2, pp. 271-294.
[42] Wang, Z-R., Chen, X-H., Jin, Y-B. and Zhou, Y-J. (2010) 'Estimating risk of foreign exchange portfolio: Using VaR and CVaR based on GARCH-EVT-Copula model', Physica A: Statistical Mechanics and its Applications, Vol. 389, pp. 4918-4928.
[43] Yamai, Y. and Yoshiba, T. (2005) 'Value at Risk versus expected shortfall: a practical perspective', Journal of Banking and Finance, Vol. 29, pp. 997-1015.
[44] Yi Hou Huang, A. (2010) 'An optimization process in Value-at-Risk estimation', Review of Financial Economics, Vol. 19, pp. 109-116.
[45] Zhang, Z. and Shinki, K. (2007) 'Extreme co-movements and extreme impacts in high frequency data in finance', Journal of Banking & Finance, Vol. 31, pp. 1399-1415.
[46] Zhao, X., Scarrott, C.J. and Reale, M. (2010) 'GARCH dependence in extreme value models with Bayesian inference', Mathematics and Computers in Simulation, Vol. 81, pp. 1430-1440.


1 See Acerbi, C. and Tasche, D. (2002) for more detail.

2 The mean excess function (MEF) and the Hill plot are two tools used for threshold determination. For a detailed discussion and several examples of the Hill plot, see Embrechts et al. (1997).
