Monthly Archives: July 2013

On Abenomics

A bit of history…
Japan had enjoyed a tremendous economic boom in the 1980s which came to an end in 1989, when the Bank of Japan sharply raised interest rates and burst the real estate and stock market bubble, marking the start of ‘The Lost Decade’. In response to the subsequent recession, Japan initiated a public works programme, among other measures, which saw economic growth rise to an average of 1.3% from 1992 to 1997. When austerity measures were introduced in 1998, economic growth fell to -2.1% (1998) and -0.1% (1999). Japan then tried its hand at Quantitative Easing in 2001 (with interest rates already near 0%), increasing the balance sheet of the central bank from 5 trillion Yen to 35 trillion Yen over a 4-year period, but with little effect on deflation and economic growth. The financial crisis of 2008 brought economic growth of -1.2% in 2008 and -6.3% in 2009. Growth returned to ~1% in the following years, but this time around Japan was joined by the rest of the developed world.

Abenomics and its immediate effects
Enter Abenomics. With $600 billion to $700 billion pumped into the economy annually, Japan’s economy is now growing faster than any other G-7 economy, with GDP growth in the first 3 months of 2013 at 4.1% annualized. The Tokyo stock market has gained 40% this year, and the inflation target is set at 2% in 2 years’ time.

However, aggressive monetary policy in Japan will have ripple effects in its trade partners, as the Yen is devalued, imports are reduced and exports increased. There remains the possibility of small-scale currency wars, or diplomatic scuffles at the least. The actions taken in response to the Yen devaluation by Japanese trade partners will be instructive in determining the overall effect of Abenomics on the Japanese trade balance.

Japanese nationalism
It is important to keep in mind the broader narrative of resurgent Japanese nationalism when talking about trade wars and currency devaluation. Prime Minister Abe has also been signalling increased defence spending, changes to the constitution (widely seen as a Western imposition) and increased aggressiveness in territorial disputes with China. Japan has historically been preoccupied with its position in the comity of nations. The economic malaise of the past couple of decades has weighed heavily on the Japanese national consciousness, particularly with a dynamic Chinese economy at its border. Japan needed to make aggressive moves, if not for the sake of the economy, then to assuage public feelings of a permanent Japanese decline. However, the broader nationalistic push leaves less room for accommodating foreign concerns about Japanese economic policy and makes it less likely that Yen devaluation will be (relatively) quietly accepted by Japanese trading partners, particularly in Asia.

Overall, I’m pretty excited to see how Abenomics unfolds in the coming years. Its results should be instructive to central banks and governments worldwide.


Filed under Economics

Financial Crisis reading list

Philip Swagel from the University of Maryland has put up this excellent article on his blog at the New York Times. It is a list (with explanations) of some of the important books to come out of the financial crisis. If you’re looking for some good summertime reading, head on over to http://economix.blogs.nytimes.com/2013/07/15/financial-crisis-reading-list-2/

The blog posts of economists like Krugman, DeLong and Cowen as the crisis unfolded, and their subsequent analyses are also an important educational resource. You can find the blogs here:

1-http://marginalrevolution.com/

2-http://krugman.blogs.nytimes.com/

3-http://delong.typepad.com/


Filed under Finance

R-ratio vs mean-variance optimization

I am looking at the following tickers from 2005-01-01 to 2008-01-01: GE, F, MSFT, DELL, INTC. I will find the mean-variance weights and the R-ratio weights and then test portfolio performance from 2008-01-01 to 2010-01-01. This is a pretty limited test, as the portfolio weights will be static over the test period, which is very unrealistic. But it will still give us a general idea of the relative performance of each optimization technique in the case of extreme events (here, the 2008 crisis).
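
For reference, here is a minimal sketch of how the in-sample return matrix might be built, assuming the quantmod package (the exact procedure is covered in my previous posts, so treat the details here as illustrative):

library(quantmod)
tickers = c("GE", "F", "MSFT", "DELL", "INTC")
getSymbols(tickers, from = "2005-01-01", to = "2008-01-01")
# adjusted-close daily log returns for each ticker, merged into one matrix
retMat = na.omit(do.call(merge, lapply(tickers,
  function(t) dailyReturn(Ad(get(t)), type = "log"))))
colnames(retMat) = tickers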

Using the procedures I went over in my previous posts I obtained the following weights for each technique:

Mean-Variance: GE-19.8%   F-0%   MSFT-70.4%  DELL-0%   INTC-9.8%

R-ratio: GE-80.3%   F-0.7%   MSFT-14%   DELL-0%   INTC-5%
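
As a rough sketch of how the statistics below can be computed for a static-weight portfolio, assuming the PerformanceAnalytics package and a log-return matrix ‘retMat’ (built as above) for the period of interest:

library(PerformanceAnalytics)
w.mv = c(GE = 0.198, F = 0, MSFT = 0.704, DELL = 0, INTC = 0.098)
portRet = xts(retMat %*% w.mv, order.by = index(retMat))  # static-weight portfolio log returns
Return.cumulative(portRet, geometric = FALSE)   # log returns are simply summed
Return.annualized(portRet, geometric = FALSE)
SharpeRatio.annualized(portRet, geometric = FALSE)
StdDev.annualized(portRet)
maxDrawdown(portRet, geometric = FALSE)
mean(portRet > 0)                               # win %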

These are the results for the mean-variance portfolio from 2005-01-01 to 2008-01-01

Cumulative Return: 0.25507335    Annual Return: 0.07888944

Annualized Sharpe Ratio: 0.47503374  Win %: 0.51128818

Annualized Volatility: 0.16607124     Maximum Drawdown: -0.21479303

Max Length Drawdown: 236.00000000
And this is how the mean-variance portfolio did from 2008-01-01 to 2010-01-01:

Cumulative Return: -0.3332456    Annual Return: -0.1831219

Annualized Sharpe Ratio: -0.4551308  Win %: 0.5039683

Annualized Volatility: 0.4023501   Maximum Drawdown: -0.6618125

Max Length Drawdown: 504.0000000


Now, moving on to returns from the r-ratio portfolio from 2005-01-01 to 2008-01-01:

Cumulative Return: 0.10471265    Annual Return: 0.03384322

Annualized Sharpe Ratio: 0.23811103 Win %: 0.50464807

Annualized Volatility: 0.14213208   Maximum Drawdown: -0.13000366

Max Length Drawdown: 345.00000000


And from 2008-01-01 to 2010-01-01

Cumulative Return: -0.5929209 Annual Return: -0.3614045

Annualized Sharpe Ratio: -0.7272896  Win %: 0.4920635

Annualized Volatility: 0.4969197   Maximum Drawdown: -0.8058170

Max Length Drawdown: 444.0000000


Very interesting results. The R-ratio portfolio performed substantially worse than the mean-variance portfolio, contrary to my original hypothesis. This may well be due to the very small size of the portfolio being tested, which leaves it vulnerable to idiosyncratic risk. For more robust results, I will need to run these tests on much larger portfolios to wash out idiosyncratic risk, and on various portfolios of a fixed size (number of securities and market cap) to get a broader understanding of the variation in performance between the two optimization techniques.

It could also be that the distribution of the securities changed, rendering the CVaR values used to calculate the R-ratio wrong. In fact, we would expect the distributions to change, given the magnitude of the events which occurred in 2008 (and how far out in the tails they were considered to be).


Filed under Finance, Portfolio Optimization

Rachev-ratio portfolio optimization using Differential Evolution

What is Differential Evolution (DE)?

Differential Evolution is an optimization technique, inspired by biology, which uses the evolution and mutation of candidate solutions to reach (or get close to) global optima over successive generations of solutions. DE does not require the function we seek to optimize to be continuous, and so presents an improvement over optimization methods such as gradient descent. This makes it useful for portfolio optimization, where real-world applications require multiple constraints and the functions to optimize may often be discontinuous and non-linear. The following abstract from this paper by Krink and Paterlini may explain things best:

Realistic portfolio optimization, in contrast to simplistic mean-variance optimization, is a challenging problem, because it requires to determine a set of optimal solutions with respect to multiple objectives, where the objective functions are often multimodal and non-smooth. Moreover, the objectives are subject to various constraints of which many are typically non-linear and discontinuous. Conventional optimization methods, such as quadratic programming, cannot cope with these realistic problem properties. A valuable alternative are stochastic search heuristics, such as simulated annealing or evolutionary algorithms.

Following on from the previous post, where the paper presented there found that optimizing the R-ratio results in superior returns compared to mean-variance optimization, let’s use DE optimization in R to get portfolio weights for an R-ratio-optimized portfolio.

What is the Rachev ratio?

Similar to how the Sharpe ratio is a measure of excess return (expected return over the risk-free rate) per unit of risk (standard deviation), the R-ratio is a measure of return (given by the Expected Tail Return) per unit of risk (given by the Expected Tail Loss).

For a 95% confidence level the Expected Tail Return (ETR) is the average of the right 5% of the distribution of returns and the Expected Tail Loss (ETL) is the average of the left 5% of the distribution of returns, over a given period of time. Read this paper for more information on the VaR, which should help you better understand ETL and ETR.

My understanding of the R-ratio is that it is measuring the risk-return characteristic of large gains compared to large losses (alternatively, tail return per unit tail risk).

So with our portfolio, we are seeking to maximize the possibility of large tail returns and minimize the possibility of large tail losses, or maximize the R-ratio if it is positive and minimize it if it’s negative.
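
To make the definition concrete, here is a bare-bones sketch using empirical quantiles (the function name and the plain-quantile approach are my own simplification; the implementation below uses the CVaR() function from the PerformanceAnalytics package instead):

rachevRatio = function(returns, alpha = 0.05){
  upper = quantile(returns, 1 - alpha)      # right-tail cutoff
  lower = quantile(returns, alpha)          # left-tail cutoff
  etr = mean(returns[returns >= upper])     # Expected Tail Return: average of the best 5%
  etl = -mean(returns[returns <= lower])    # Expected Tail Loss: average loss of the worst 5%
  etr/etl
}
# quick check on simulated returns
set.seed(1)
rachevRatio(rnorm(1000, mean = 0.0005, sd = 0.01))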

R implementation using DE:

Using DE for this is a bit of overkill, but it’s good practice for when we need to create portfolios with several other constraints. I used code from these excellent slides posted by Guy Yollin on portfolio optimization.

1-First, we need to get and load the DEoptim package
install.packages('DEoptim')
library('DEoptim')

The function DEoptim() requires that you pass it an objective function to minimize, along with lower and upper bounds for the parameter values that the objective function works with and that DEoptim() will optimize.

2-Now we need to write the objective function. The code for this I obtained from the slides posted above.

optRR.gt3=function(w,ret){
retu=ret%*%w


The parameter ‘w’ is the vector of weights which will be optimized. ‘ret’ is the matrix of log returns for each security to be considered in the portfolio. ‘retu’ is the vector of portfolio returns, given the matrix of security returns ‘ret’ and the weight on each security.

obj= -CVaR(as.ts(-retu))/CVaR(as.ts(retu))
obj=ifelse(obj>0,-obj,obj)

The variable ‘obj’ above is a calculation of the Rachev ratio. ‘retu’ has to be converted into a time series again (hence the as.ts() function)  as multiplying the individual security returns by portfolio weights to obtain portfolio returns messes up the time series information. The second line of code above checks whether the R-ratio obtained is negative or positive, as DEoptim() will minimize the objective function. If it is positive, the ifelse() turns it negative so that the absolute value of the R-ratio will be maximized when DEoptim() minimizes the objective function.

weight.penalty = 100*(1-sum(w))^2
small.weight.penalty = 100*sum(w[w<0.03])^2
return(obj + weight.penalty + small.weight.penalty)
}

The first line of code above adds a penalty to the objective function if the sum of the portfolio weights rises above or falls below 1. The second line adds a penalty if the weight on any single security falls below 3%. Finally, the last line returns the R-ratio with the penalties added, and the closing brace completes the function.
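
Putting the fragments together, with the weight vector named consistently throughout, the complete objective function looks roughly like this (CVaR() is assumed to come from the PerformanceAnalytics package):

library(PerformanceAnalytics)
optRR.gt3 = function(w, ret){
  retu = ret %*% w                                   # portfolio returns for the weights w
  obj = -CVaR(as.ts(-retu))/CVaR(as.ts(retu))        # the R-ratio, as described above
  obj = ifelse(obj > 0, -obj, obj)                   # flip sign so minimizing maximizes |R-ratio|
  weight.penalty = 100*(1 - sum(w))^2                # penalty when weights don't sum to 1
  small.weight.penalty = 100*sum(w[w < 0.03])^2      # penalty on weights below 3%
  return(obj + weight.penalty + small.weight.penalty)
}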

3-Now all we need to do is call DEoptim() with the right parameters.
res=DEoptim(optRR.gt3,lower=c(0,0,0,0),upper=c(1,1,1,1),ret=retMat)

The first argument above is the objective function itself, the second argument is a vector specifying the lower bounds for each of the security weights, the third argument similarly specifies the upper bounds, and the fourth argument is the matrix of log returns for the four securities (which I covered in previous posts). We type this in, hit enter, and wait for our solutions to evolve. I ran this eight times and got the following results:

Iteration: 200 bestvalit: -1.085795 bestmemit: 0.580635 0.000007 0.001226 0.418169
Iteration: 200 bestvalit: -1.085815 bestmemit: 0.580927 0.000006 0.000118 0.419206
Iteration: 200 bestvalit: -1.085842 bestmemit: 0.586597 0.000002 0.000003 0.413373
Iteration: 200 bestvalit: -1.085801 bestmemit: 0.584547 0.000038 0.000201 0.415401
Iteration: 200 bestvalit: -1.085830 bestmemit: 0.587022 0.000003 0.000412 0.412491
Iteration: 200 bestvalit: -1.085809 bestmemit: 0.591487 0.000013 0.000058 0.408194
Iteration: 200 bestvalit: -1.085808 bestmemit: 0.587920 0.000012 0.000090 0.411523
Iteration: 200 bestvalit: -1.085827 bestmemit: 0.588024 0.000002 0.000445 0.411674

The weights stay fairly consistent each time the function is run. You can vary the number of iterations in each function call to get even greater consistency. The weights provided are in order of their listing in the returns matrix passed to the DEoptim() function. In the next post, I will compare the performance of this portfolio with that of the minimum-variance portfolio across different time periods. Should be interesting to see how they perform, especially over the 2008 crisis.
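
As a side note, the optimized weights can be pulled directly out of the result object; a minimal sketch:

w = res$optim$bestmem    # best parameter vector found
w = w/sum(w)             # rescale so the weights sum to exactly 1
round(w, 4)
res$optim$bestval        # objective value at the optimum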


Filed under Finance, Portfolio Optimization

How to define and measure risk

I came across this really interesting paper on how to define and measure risk. You can find it here, but I have also summarized it below and written down some excerpts which I thought stood out. I bolded some text for emphasis, and text in [ ] brackets is notes I took while reading.

“Holton (2004) proposes that a definition of risk has to take into account two essential components of observed phenomena: exposure and uncertainty. Moreover, all the admissible tools available to an investor to cope with risk can model only the risk that is perceived.”

“Attempts to quantify risk have led to the notion of a risk measure. A risk measure is a functional that assigns a numerical value to a random variable which is interpreted as a loss. Since risk is subjective because it is related to an investor’s perception of exposure and uncertainty, risk measures are strongly related to utility functions.”

“In portfolio theory, a risk measure has always been valued principally because of its capacity of ordering investor preferences.”

“…minimizing the probability of being below a benchmark is equivalent to maximizing an expected state dependent utility function (see Castagnoli and LiCalzi (1996, 1999)).”

[Multiple objectives and multiple benchmarks make risk a multi-dimensional phenomenon]

“… risk is an asymmetric concept related to downside outcomes, and any realistic way of measuring risk should consider upside and downside potential outcomes differently. Furthermore, a measure of uncertainty is not necessarily adequate in measuring risk. The standard deviation considers both positive and negative deviations from the mean as a potential risk. Thus, in this case, out-performance relative to the mean is penalized just as much as under-performance.”

“Expected tail loss (ETL), an example of a coherent risk measure, is also known as the Conditional Value at Risk (CVaR), if we assume a continuous security returns distribution. ETL can be interpreted as the average loss beyond VaR.”

[Alternatively, if a returns distribution is estimated with 95% confidence (including the right/positive tail), the CVaR is the mean of the remaining 5% in the left/negative tail.]
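
A quick numeric illustration of this in R, on simulated data and purely for intuition:

set.seed(42)
r = rnorm(10000, mean = 0, sd = 0.02)   # simulated daily returns
var95 = unname(quantile(r, 0.05))       # 5% quantile: the VaR threshold (as a return)
etl95 = -mean(r[r <= var95])            # ETL/CVaR: average loss beyond VaR
c(VaR = -var95, ETL = etl95)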

“Clearly, if the degree of uncertainty changes over time, the risk too has to change over time. In this case, the investment return process is not stationary; that is, we cannot assume that returns maintain their distribution unvaried in the course of time.”

“Under the assumption of stationary and independent realizations, the oldest observations have the same influence on our decisions as the most recent ones. Is this assumption realistic? Recent studies on investment return processes have shown that historical realizations are not independent and exhibit autoregressive behavior. Consequently, we observe the clustering of volatility effect; that is, each observation influences subsequent ones.”

[Cointegration is when two price series display a consistent spread across time. It is different from correlation as, in correlation, the direction of movements may be the same, but the magnitude may vary. With cointegration present, if the magnitude of price movements changes the spread, the spread will show mean-reversion. Price series may be both stochastic and show cointegration. A pair of price series showing cointegration is called a stationary pair]

“The ex-ante analysis clearly indicates that the minimum variance portfolios (portfolios 1 and 3) present a lower dispersion (standard deviation) and a higher risk of big losses (VaR and ETL) than portfolios that maximize the R-ratio given by (12) (respectively portfolios 2 and 4). Thus the ex-ante analysis suggests that the more conservative minimum variance portfolios (portfolios 1 and 3) not always take into account the possibility of big losses.”

“In particular, on 5/31/2004 the final wealth of the three different strategies based on R-, Sharpe and STARR ratios is respectively 1.76, 1.07, and 0.91. Therefore, as we expected, we obtain that the strategy based on the maximization of the STARR ratio provides the most conservative behavior while the strategy based on the R-ratio permits to increase the final wealth much more than the others.”

“…most investors perceive a low probability of a large loss to be far more risky than a high probability of a small loss. Therefore, investors perceive risk to be non-linear.”


July 2, 2013 · 1:48 pm