# Category Archives: Finance

Philip Swagel from the University of Maryland has put up this excellent article on his blog at the New York Times. It is a list (with explanations) of some of the important books to come out of the financial crisis. If you’re looking for some good summertime reading, head on over to http://economix.blogs.nytimes.com/2013/07/15/financial-crisis-reading-list-2/

The blog posts of economists like Krugman, DeLong and Cowen as the crisis unfolded, and their subsequent analyses, are also an important educational resource.

Filed under Finance

## R-ratio vs mean-variance optimization

I am looking at the tickers GE, F, MSFT, DELL and INTC from 2005-01-01 to 2008-01-01. I will find the mean-variance weights and the R-ratio weights and then test portfolio performance from 2008-01-01 to 2010-01-01. This is a pretty limited test, as the portfolio weights will be static over the test period, which is very unrealistic. But it will still give us a general idea of the relative performance of each optimization technique in the case of extreme events (the 2008 crisis, here).

Using the procedures I went over in my previous posts I obtained the following weights for each technique:

Mean-Variance: GE-19.8%   F-0%   MSFT-70.4%  DELL-0%   INTC-9.8%

R-ratio: GE-80.3%   F-0.7%   MSFT-14%   DELL-0%   INTC-5%

These are the results for the mean-variance portfolio from 2005-01-01 to 2008-01-01:

Cumulative Return: 0.25507335    Annual Return: 0.07888944

Annualized Sharpe Ratio: 0.47503374  Win %: 0.51128818

Annualized Volatility: 0.16607124     Maximum Drawdown: -0.21479303

Max Length Drawdown: 236

And this is how the mean-variance portfolio did from 2008-01-01 to 2010-01-01:

Cumulative Return: -0.3332456    Annual Return: -0.1831219

Annualized Sharpe Ratio: -0.4551308  Win %: 0.5039683

Annualized Volatility: 0.4023501   Maximum Drawdown: -0.6618125

Max Length Drawdown: 504

Now, moving on to returns from the R-ratio portfolio from 2005-01-01 to 2008-01-01:

Cumulative Return: 0.10471265    Annual Return: 0.03384322

Annualized Sharpe Ratio: 0.23811103 Win %: 0.50464807

Annualized Volatility: 0.14213208   Maximum Drawdown: -0.13000366

Max Length Drawdown: 345

And from 2008-01-01 to 2010-01-01:

Cumulative Return: -0.5929209 Annual Return: -0.3614045

Annualized Sharpe Ratio: -0.7272896  Win %: 0.4920635

Annualized Volatility: 0.4969197   Maximum Drawdown: -0.8058170

Max Length Drawdown: 444

Very interesting results. The R-ratio portfolio performed substantially worse than the mean-variance portfolio, contrary to my original hypothesis. This may well be due to the very small size of the portfolio tested, which leaves it vulnerable to idiosyncratic risk. For more robust results, I will need to run these tests on much larger portfolios to wash out idiosyncratic risk, and on various portfolios of a fixed size (number of securities and market cap) to get a broader understanding of the variation in performance between the two optimization techniques.

It could also be that the distribution of the securities changed, rendering the CVaR values used to calculate the R-ratio wrong. In fact, we would expect the distributions to change, given the magnitude of the events which occurred in 2008 (and how far out in the tails they were considered to be).
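For reference, the summary statistics quoted in this post can be computed from a vector of daily returns in a few lines of base R. This is my own sketch (the helper name `perfStats` is made up, and it assumes 252 trading days per year and a zero risk-free rate), not the exact code used to produce the numbers above:

```r
# Summary statistics for a daily return series
perfStats <- function(ret, rf = 0) {
  cumret <- prod(1 + ret) - 1                # cumulative return
  years  <- length(ret) / 252                # assumes 252 trading days/year
  annret <- (1 + cumret)^(1 / years) - 1     # annualized return
  annvol <- sd(ret) * sqrt(252)              # annualized volatility
  sharpe <- (annret - rf) / annvol           # annualized Sharpe ratio
  winpct <- mean(ret > 0)                    # fraction of up days
  wealth <- cumprod(1 + ret)
  maxdd  <- min(wealth / cummax(wealth) - 1) # maximum drawdown
  c(CumRet = cumret, AnnRet = annret, Sharpe = sharpe,
    WinPct = winpct, AnnVol = annvol, MaxDD = maxdd)
}

# One simulated year: alternating up and down days
perfStats(rep(c(0.01, -0.005), 126))
```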

Filed under Finance, Portfolio Optimization

## Rachev-ratio portfolio optimization using Differential Evolution

What is Differential Evolution (DE)?

Differential Evolution is an optimization technique inspired by biology, which uses evolution and mutation of candidate solutions to reach (or get close to) global optima over the course of successive generations of solutions. DE does not require the function we seek to optimize to be continuous, and so presents an improvement over optimization methods such as gradient descent. Therefore it is useful for things like portfolio optimization, where real-world applications involve multiple constraints and the functions to optimize may often be discontinuous and non-linear. The following abstract from this paper by Krink and Paterlini may explain things best:

Realistic portfolio optimization, in contrast to simplistic mean-variance optimization, is a challenging problem, because it requires to determine a set of optimal solutions with respect to multiple objectives, where the objective functions are often multimodal and non-smooth. Moreover, the objectives are subject to various constraints of which many are typically non-linear and discontinuous. Conventional optimization methods, such as quadratic programming, cannot cope with these realistic problem properties. A valuable alternative are stochastic search heuristics, such as simulated annealing or evolutionary algorithms.
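To make the mutation/crossover/selection loop concrete, here is a minimal DE implementation in base R, minimizing a simple test function. This is an illustrative sketch of the classic DE/rand/1/bin scheme, not the actual DEoptim code:

```r
set.seed(1)

# Minimal Differential Evolution (DE/rand/1/bin), for illustration only
de <- function(fn, lower, upper, NP = 40, Fw = 0.8, CR = 0.9, itermax = 200) {
  d   <- length(lower)
  pop <- t(replicate(NP, runif(d, lower, upper)))  # random initial population
  fit <- apply(pop, 1, fn)
  for (it in 1:itermax) {
    for (i in 1:NP) {
      idx    <- sample(setdiff(1:NP, i), 3)        # three distinct partners
      mutant <- pop[idx[1], ] + Fw * (pop[idx[2], ] - pop[idx[3], ])
      mutant <- pmin(pmax(mutant, lower), upper)   # clamp to the bounds
      cross  <- runif(d) < CR
      cross[sample(d, 1)] <- TRUE                  # at least one gene crosses over
      trial  <- ifelse(cross, mutant, pop[i, ])
      ftrial <- fn(trial)
      if (ftrial <= fit[i]) {                      # greedy selection
        pop[i, ] <- trial
        fit[i]   <- ftrial
      }
    }
  }
  best <- which.min(fit)
  list(par = pop[best, ], value = fit[best])
}

# Minimize the 2-D sphere function; the optimum is at the origin
res <- de(function(x) sum(x^2), lower = c(-5, -5), upper = c(5, 5))
```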

Following on from the previous post, where the cited paper found that optimizing the R-ratio yields superior returns compared to mean-variance optimization, let's use DE optimization in R to get portfolio weights for an R-ratio optimized portfolio.

What is the Rachev ratio?

Similar to how the Sharpe ratio is a measure of excess return (expected return over the risk-free rate) per unit of risk (standard deviation), the R-ratio is a measure of return (given by the Expected Tail Return) per unit of risk (given by the Expected Tail Loss).

For a 95% confidence level the Expected Tail Return (ETR) is the average of the right 5% of the distribution of returns and the Expected Tail Loss (ETL) is the average of the left 5% of the distribution of returns, over a given period of time. Read this paper for more information on the VaR, which should help you better understand ETL and ETR.

My understanding of the R-ratio is that it is measuring the risk-return characteristic of large gains compared to large losses (alternatively, tail return per unit tail risk).

So with our portfolio, we are seeking to maximize the possibility of large tail returns and minimize the possibility of large tail losses, or maximize the R-ratio if it is positive and minimize it if it’s negative.
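Both tails can be estimated directly from a return sample. Here is a quick base-R sketch (the helper name and the simple tail-mean estimator are my own; alpha = 0.05 corresponds to the 95% confidence level):

```r
# Empirical Rachev ratio: mean of the right 5% tail over mean of the left 5% tail
rachevRatio <- function(ret, alpha = 0.05) {
  etr <- mean(ret[ret >= quantile(ret, 1 - alpha)])  # Expected Tail Return
  etl <- -mean(ret[ret <= quantile(ret, alpha)])     # Expected Tail Loss (positive)
  etr / etl
}

# For a perfectly symmetric return sample the ratio is close to 1
rachevRatio(seq(-0.1, 0.1, by = 0.001))
```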

R implementation using DE:

Using DE for this is a bit of overkill, but it's good practice for when we need to create portfolios with several other constraints. I used code from these excellent slides posted by Guy Yollin on portfolio optimization.

1-First, we need to get and load the DEoptim package
install.packages('DEoptim')
library(DEoptim)

The function DEoptim() requires that you pass it an objective function to minimize, along with vectors of lower and upper bounds for the parameters it will optimize.

2-Now we need to write the objective function. The code for this I obtained from the slides posted above.

```
optRR.gt3=function(w,ret){
retu=ret%*%w
```


The parameter 'w' is the vector of weights which will be optimized, 'ret' is the matrix of log returns for each security to be considered in the portfolio, and 'retu' is the vector of portfolio returns, given the matrix of security returns and the weight on each security.

```
obj= -CVaR(as.ts(-retu))/CVaR(as.ts(retu))
obj=ifelse(obj>0,-obj,obj)
```

The variable ‘obj’ above is a calculation of the Rachev ratio. ‘retu’ has to be converted into a time series again (hence the as.ts() function)  as multiplying the individual security returns by portfolio weights to obtain portfolio returns messes up the time series information. The second line of code above checks whether the R-ratio obtained is negative or positive, as DEoptim() will minimize the objective function. If it is positive, the ifelse() turns it negative so that the absolute value of the R-ratio will be maximized when DEoptim() minimizes the objective function.

```
weight.penalty = 100*(1-sum(w))^2
small.weight.penalty=100*sum(w[w<0.03])^2
return(obj+weight.penalty+small.weight.penalty)
}
```

The first line of code above adds a penalty to the objective function if the sum of the portfolio weights deviates from 1. The second line adds a penalty if the weight on any single security falls below 3%. Finally, the last line returns the R-ratio with the penalties added.
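Putting the three fragments together, the complete objective function looks like this. Note that this is my own self-contained version: a simple empirical tail mean stands in for the package CVaR() function used above, so the numbers will differ slightly:

```r
# Empirical CVaR/ETL: average loss in the left alpha-tail, as a positive number
cvar <- function(x, alpha = 0.05) -mean(x[x <= quantile(x, alpha)])

optRR.gt3 <- function(w, ret) {
  retu <- ret %*% w                              # portfolio returns
  obj  <- -cvar(-retu) / cvar(retu)              # negative Rachev ratio (ETR/ETL)
  obj  <- ifelse(obj > 0, -obj, obj)             # keep it negative, so minimizing
                                                 # maximizes the ratio's magnitude
  weight.penalty <- 100 * (1 - sum(w))^2         # weights must sum to 1
  small.weight.penalty <- 100 * sum(w[w < 0.03])^2  # discourage tiny weights
  obj + weight.penalty + small.weight.penalty
}
```

Because the Rachev ratio is scale-invariant, a weight vector summing to 0.8 produces the same ratio as one summing to 1, so without the weight penalty DEoptim would have no reason to keep the weights fully invested.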

3-Now all we need to do is call DEoptim() with the right parameters.
res=DEoptim(optRR.gt3,lower=c(0,0,0,0),upper=c(1,1,1,1),ret=retMat)

The first argument above is the objective function itself, the second argument is a vector specifying the lower bound for each security weight, the third argument similarly specifies the upper bounds, and the fourth argument is the matrix of log returns for four securities (which I covered in previous posts). We type this in, hit enter, and wait for our solutions to evolve. I ran this eight times and got the following results:

Iteration: 200 bestvalit: -1.085795 bestmemit: 0.580635 0.000007 0.001226 0.418169
Iteration: 200 bestvalit: -1.085815 bestmemit: 0.580927 0.000006 0.000118 0.419206
Iteration: 200 bestvalit: -1.085842 bestmemit: 0.586597 0.000002 0.000003 0.413373
Iteration: 200 bestvalit: -1.085801 bestmemit: 0.584547 0.000038 0.000201 0.415401
Iteration: 200 bestvalit: -1.085830 bestmemit: 0.587022 0.000003 0.000412 0.412491
Iteration: 200 bestvalit: -1.085809 bestmemit: 0.591487 0.000013 0.000058 0.408194
Iteration: 200 bestvalit: -1.085808 bestmemit: 0.587920 0.000012 0.000090 0.411523
Iteration: 200 bestvalit: -1.085827 bestmemit: 0.588024 0.000002 0.000445 0.411674

The weights stay fairly consistent each time the function is run. You can vary the number of iterations in each function call to get even greater consistency. The weights provided are in order of their listing in the returns matrix passed to the DEoptim() function. In the next post, I will compare the performance of this portfolio with that of the minimum-variance portfolio across different time periods. Should be interesting to see how they perform, especially over the 2008 crisis.

## How to define and measure risk

I came across this really interesting paper on how to define and measure risk. You can find it here, but I also summarized it below and wrote down some excerpts that stood out. I bolded some text for emphasis, and anything in [ ] brackets is a note I took while reading.

“Holton (2004) proposes that a definition of risk has to take into account two essential components of observed phenomena: exposure and uncertainty. Moreover, all the admissible tools available to an investor to cope with risk can model only the risk that is perceived.”

“Attempts to quantify risk have led to the notion of a risk measure. A risk measure is a functional that assigns a numerical value to a random variable which is interpreted as a loss. Since risk is subjective because it is related to an investor’s perception of exposure and uncertainty, risk measures are strongly related to utility functions.”

“In portfolio theory, a risk measure has always been valued principally because of its capacity of ordering investor preferences.”

“…minimizing the probability of being below a benchmark is equivalent to maximizing an expected state dependent utility function (see Castagnoli and LiCalzi (1996, 1999)).”

[Multiple objectives and multiple benchmarks make risk a multi-dimensional phenomenon]

“…risk is an asymmetric concept related to downside outcomes, and any realistic way of measuring risk should consider upside and downside potential outcomes differently. Furthermore, a measure of uncertainty is not necessarily adequate in measuring risk. The standard deviation considers both positive and negative deviations from the mean as a potential risk. Thus, in this case, out-performance relative to the mean is penalized just as much as under-performance.”

“Expected tail loss (ETL), an example of a coherent risk measure, is also known as Conditional Value at Risk (CVaR), if we assume a continuous security returns distribution. ETL can be interpreted as the average loss beyond VaR.”

[Alternatively, if a returns distribution is estimated with 95% confidence (including the right/positive tail), the CVaR is the mean of the remaining 5% in the left/negative tail.]

“Clearly, if the degree of uncertainty changes over time, the risk too has to change over time. In this case, the investment return process is not stationary; that is, we cannot assume that returns maintain their distribution unvaried in the course of time.”

“Under the assumption of stationary and independent realizations, the oldest observations have the same influence on our decisions as the most recent ones. Is this assumption realistic? Recent studies on investment return processes have shown that historical realizations are not independent and exhibit autoregressive behavior. Consequently, we observe the clustering of volatility effect; that is, each observation influences subsequent ones.”

[Cointegration is when two price series display a consistent spread across time. It is different from correlation: with correlation, the direction of movements may be the same while the magnitude varies. With cointegration present, if price movements change the spread, the spread will show mean-reversion. Price series may be both stochastic and show cointegration. A pair of price series showing cointegration is called a stationary pair]

“The ex-ante analysis clearly indicates that the minimum variance portfolios (portfolios 1 and 3) present a lower dispersion (standard deviation) and a higher risk of big losses (VaR and ETL) than portfolios that maximize the R-ratio given by (12) (respectively portfolios 2 and 4). Thus the ex-ante analysis suggests that the more conservative minimum variance portfolios (portfolios 1 and 3) not always take into account the possibility of big losses.”

“In particular, on 5/31/2004 the final wealth of the three different strategies based on R-, Sharpe and STARR ratios is respectively 1.76, 1.07, and 0.91. Therefore, as we expected, we obtain that the strategy based on the maximization of the STARR ratio provides the most conservative behavior while the strategy based on the R-ratio permits to increase the final wealth much more than the others.”

“…most investors perceive a low probability of a large loss to be far more risky than a high probability of a small loss. Therefore, investors perceive risk to be non-linear.”

July 2, 2013 · 1:48 pm

## Graphing with fPortfolio

Now to making pretty-looking graphs and charts for portfolio optimization! The first thing we will do is determine the frontier for our combination of securities. Remember, the variable returnsMatrix below is a matrix of returns for all the securities in your portfolio.

frontier=portfolioFrontier(as.timeSeries(returnsMatrix))

This gives us the frontier. If you type in ?frontierPlot and read through the help page, you will find all the interesting plots you can make.

We can plot this by:
frontierPlot(frontier)
grid()

The circles in dark mark the efficient frontier, and the grid() function just makes it look nicer. We can now add to this plot by doing the following:

minvariancePoints(frontier,col='red',pch=20)
cmlPoints(frontier,col='blue',pch=20)
tangencyPoints(frontier,col='yellow',pch=4)

This adds the minimum variance point, the capital market point and the tangency point. The tangency point is marked with a yellow 'x' and lies in exactly the same location as the capital market point in blue. We can pile on even more stuff:

tangencyLines(frontier,col='blue')
sharpeRatioLines(frontier,col='orange',lwd=2)
twoAssetsLines(frontier,col='green',lwd=2)
singleAssetPoints(frontier,col='black',pch=20)

So we now have the tangency line in blue, the Sharpe ratio line in orange, some of the visible single-asset points in black, and the efficient frontiers for all possible two-asset combinations in our portfolio. Looking at this, we can see how the assets contribute to the portfolio efficient frontier, and why some assets are highly weighted while others are weighted at 0. Another very interesting chart is obtained by:

weightsPlot(frontier)

This displays the weights on the different securities, the risk, and the return along the frontier. The black line through the chart indicates the minimum variance portfolio. Let's create some graphs for the tangency portfolio:

tgPort=tangencyPortfolio(as.timeSeries(returnsMatrix))
weightsPie(tgPort)

This gives a pie chart of the weights on the securities in the tangency portfolio.

weightedReturnsPie(tgPort)

This gives a pie chart of the weighted returns of the tangency portfolio.

Filed under Finance, Portfolio Optimization

## Portfolio Optimization with fPortfolio

fPortfolio contains a number of functions to make portfolio optimization easier. I can compare the results I get from the functions in fPortfolio to the results from my function from the previous post. I don't expect them to be exactly the same, but they should be broadly similar.

First, install and load the package:
install.packages('fPortfolio')
library(fPortfolio)

Next, you need to build a returns matrix for the securities you are interested in. You can create return vectors for the different tickers (using methods from an earlier post) and then combine them together using cbind(). The function I wrote in the previous post also returns a matrix of security returns, so you can just use that code as well.

This is the function for the tangency (highest Sharpe ratio) portfolio:
tangencyPortfolio(as.timeSeries(matrix),constraints='maxW[1:9]=0.2')

Here I set the same constraints as in the function I wrote. maxW[1:9]=0.2 sets the maximum weight for each of securities 1 through 9 (which is all of them) to 20%.

The output from this function call is:

Title:
MV Tangency Portfolio
Estimator: covEstimator
Optimize: minRisk
Constraints: maxW

Portfolio Weights:
0.0000  0.0000  0.2000  0.0335  0.2000  0.2000  0.1760  0.1905  0.0000

Covariance Risk Budgets:
0.0000  0.0000  0.1301  0.0286  0.1409  0.2407  0.1773  0.2823  0.0000

Target Return and Risks:
  mean      mu     Cov   Sigma    CVaR     VaR
0.0006  0.0006  0.0161  0.0161  0.0398  0.0224

This obviously runs much faster, and gives more complete and more readable information than the function I wrote. Oh well. It is interesting to see that the weights given for a couple of the securities are different. Not having read the code written by the authors of this function, I am more inclined to trust the results of the brute-force function I wrote; however, the difference is most likely due to different covariance estimation methods.

It is commonly known that portfolio weights in a Markowitz mean-variance optimization framework are very sensitive to the estimated means and covariances, and even differences in rounding can lead to fairly different weights. Also, technically, we are supposed to be using expected returns as input and not historical returns. Using historical returns assumes that the returns of each period are independent, come from the same distribution and sample the true distribution of the security. All of these assumptions can be very easily shown to be false.
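This sensitivity is easy to demonstrate with made-up numbers. For the unconstrained problem, the optimal risky weights are proportional to Σ⁻¹μ; when two assets are highly correlated, bumping one expected return by half a percent swings the weights wildly (a sketch, not fPortfolio's estimator):

```r
# Two highly correlated assets (rho = 0.99) plus a third, all with sd = 0.2
Sigma <- matrix(c(0.0400, 0.0396, 0.0080,
                  0.0396, 0.0400, 0.0080,
                  0.0080, 0.0080, 0.0400), nrow = 3)

# Unconstrained optimal risky weights: w proportional to solve(Sigma, mu)
mvWeights <- function(mu) {
  w <- solve(Sigma, mu)
  w / sum(w)                      # normalize to sum to 1
}

w1 <- mvWeights(c(0.100, 0.100, 0.080))  # baseline expected returns
w2 <- mvWeights(c(0.105, 0.100, 0.080))  # asset 1 bumped by just 0.5%
round(w1, 2)
round(w2, 2)                             # the weights shift massively
```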

In the next post, I will experiment with some of the graphs and plots we can make using fPortfolio.

Filed under Finance, Portfolio Optimization

## Portfolio Optimization

Changing tracks, I now want to look at portfolio optimization. Although this is very different from developing trading strategies, it is useful to know how to construct minimum-variance portfolios and the like, if only for curiosity's sake. Also, just a (hopefully unnecessary) note: portfolio optimization and parameter optimization (which I covered in the last post) are two completely different things.

Minimum-variance portfolio optimization has a lot of problems associated with it, but it makes for a good starting point, as it is the most commonly discussed optimization technique in classroom finance. One of my biggest issues is with the measurement of risk via volatility. Security out-performance contributes as much to volatility (and hence risk) as security under-performance, which ideally shouldn't be the case.
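One common alternative that addresses this complaint is downside deviation, which counts only returns below a target (zero here). A quick base-R comparison on a made-up, mostly-upside return series:

```r
# A skewed return series: large gains, small losses
ret <- c(0.08, 0.07, 0.09, -0.01, -0.02, 0.10, -0.01, 0.08)

vol  <- sd(ret)                         # penalizes the big up moves too
ddev <- sqrt(mean(pmin(ret, 0)^2))      # downside deviation, target return 0

c(vol = vol, downside = ddev)           # downside risk is much smaller here
```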

First, install the package tseries:
install.packages('tseries')

The function of interest is portfolio.optim(). I decided to write my own function to enter in a vector of tickers, start and end dates for the dataset, min and max weight constraints and short-selling constraints. This function first processes the data and then passes it to portfolio.optim to determine the minimum variance portfolio for a given level of return. It then cycles through increasingly higher returns to check how high the Sharpe ratio can go.

Here is the code with comments:

```
minVarPortfolio=function(tickers,start='2000-01-01',end=Sys.Date(),
                         riskfree=0,short=TRUE,lowestWeight=-1,highestWeight=1){

  require(tseries)
  require(quantmod)

  #Initialize all the variables we will be using. returnMatrix
  #starts as NULL and is built up column by column in the loop
  #below. sharpe is set to 0. The weights vector is set equal in
  #length to the number of tickers. The portfolio is set to NULL.
  #A 'constraint' variable is created to pass on the short
  #parameter to the portfolio.optim function. And vectors are
  #created with the low and high weight restrictions, which are
  #then passed to the portfolio.optim function as well.

  returnMatrix=NULL
  sharpe=0
  weights=vector(,length(tickers))
  port=NULL
  constraint=short
  lowVec=rep(lowestWeight,length(tickers))
  hiVec=rep(highestWeight,length(tickers))

  #This for-loop cycles through the tickers, calculates their
  #daily log returns, and binds the return vector for each ticker
  #into the matrix
  for(i in 1:length(tickers)){
    temp=getSymbols(tickers[i],auto.assign=FALSE,from=start,to=end)
    ret=diff(log(Cl(temp)))
    if(i==1){
      returnMatrix=ret
    }
    else{
      returnMatrix=cbind(returnMatrix,ret)
    }
  }

  returnMatrix[is.na(returnMatrix)]=0

  #This for-loop cycles through target returns to test the
  #portfolio.optim function for the highest Sharpe ratio.
  for(j in 1:100){

    #Stores the daily log of the target return in retcalc
    retcalc=log(1+j/100)
    retcalc=retcalc/252
    print(paste("Ret Calc:",retcalc))

    #Tries to see if the specified return from retcalc can result
    #in an efficient portfolio
    try(port<-portfolio.optim(returnMatrix,pm=retcalc,shorts=constraint,
        reslow=lowVec,reshigh=hiVec,riskfree=riskfree),silent=T)

    #If the portfolio exists, it is compared against previous
    #portfolios for different returns using the Sharpe ratio. If it
    #has the highest Sharpe ratio, it is stored and the old one is
    #discarded.
    if(!is.null(port)){
      print('Not Null')
      sd=port$ps
      tSharpe=(retcalc-riskfree)/sd
      print(paste("Sharpe",tSharpe))

      if(tSharpe>sharpe){
        sharpe=tSharpe
        weights=port$pw
      }
    }
  }
  print(paste('Sharpe:',sharpe))
  print(rbind(tickers,weights))
  return(returnMatrix)
}
```


This code works fine except when the restrictions are too strict, in which case the portfolio.optim function can't find a minimum-variance portfolio. This happens if the optimum portfolio has negative returns, which my code doesn't test for. For this reason, I wanted to try out other ways of finding the highest-Sharpe portfolio. There are numerous tutorials out there on how to do this.

After I run my function, with the following tickers and constraints:

matrix=minVarPortfolio(c('NVDA','YHOO','GOOG','CAT','BNS','POT','STO','MBT','SNE'),lowestWeight=0,highestWeight=0.2,start='2000-01-01',end='2013-06-01')

This is the output I get:

"Sharpe: 0.177751547083007"

NVDA: -1.58276161084957e-19
YHOO: 2.02785605793095e-17
GOOG: 0.2
CAT: 0.104269676769825
BNS: 0.2
POT: 0.2
STO: 0.189985091184918
MBT: 0.105745232045257
SNE: -2.85654465380669e-17

The 'e-XX' weights basically indicate a weighting of zero on that particular security (NVDA, YHOO and SNE above). In the next post I will look at how all this can be done using a package called 'fPortfolio'. Happy trading!