Tag Archives: R

R-ratio vs mean-variance optimization

I am looking at the following tickers from 2005-01-01 to 2008-01-01: GE, F, MSFT, DELL and INTC. I will find the mean-variance weights and the R-ratio weights, and then test portfolio performance from 2008-01-01 to 2010-01-01. This is a fairly limited test, as the portfolio weights will be static over the test period, which is very unrealistic. But it will still give us a general idea of the relative performance of each optimization technique during extreme events (the 2008 crisis, in this case).

Using the procedures I went over in my previous posts I obtained the following weights for each technique:

Mean-Variance: GE-19.8%   F-0%   MSFT-70.4%  DELL-0%   INTC-9.8%

R-ratio: GE-80.3%   F-0.7%   MSFT-14%   DELL-0%   INTC-5%
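As a sketch, the out-of-sample test amounts to applying the static weights to the test-period returns. The tickers and dates are from this post; quantmod for data retrieval and the Performance() helper from an earlier post are assumed:

```r
library(quantmod)
library(PerformanceAnalytics)

tickers = c('GE', 'F', 'MSFT', 'DELL', 'INTC')
mvWeights = c(0.198, 0.000, 0.704, 0.000, 0.098)  # mean-variance weights above

# Build the test-period returns matrix
returnsMatrix = NULL
for(t in tickers){
    prices = Ad(getSymbols(t, from='2008-01-01', to='2010-01-01', auto.assign=FALSE))
    returnsMatrix = cbind(returnsMatrix, dailyReturn(prices))
}

# Static-weight portfolio returns over the test period
portRets = xts(as.matrix(returnsMatrix) %*% mvWeights, order.by=index(returnsMatrix))
Performance(portRets)
```

Swapping in the R-ratio weights gives the second set of results below.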

These are the results for the mean-variance portfolio from 2005-01-01 to 2008-01-01:

Cumulative Return: 0.25507335    Annual Return: 0.07888944

Annualized Sharpe Ratio: 0.47503374  Win %: 0.51128818

Annualized Volatility: 0.16607124     Maximum Drawdown: -0.21479303

Max Length Drawdown: 236.00000000
And this is how the mean-variance portfolio did from 2008-01-01 to 2010-01-01:

Cumulative Return: -0.3332456    Annual Return: -0.1831219

Annualized Sharpe Ratio: -0.4551308  Win %: 0.5039683

Annualized Volatility: 0.4023501   Maximum Drawdown: -0.6618125

Max Length Drawdown: 504.0000000


Now, moving on to the returns of the R-ratio portfolio from 2005-01-01 to 2008-01-01:

Cumulative Return: 0.10471265    Annual Return: 0.03384322

Annualized Sharpe Ratio: 0.23811103 Win %: 0.50464807

Annualized Volatility: 0.14213208   Maximum Drawdown: -0.13000366

Max Length Drawdown: 345.00000000


And from 2008-01-01 to 2010-01-01:

Cumulative Return: -0.5929209 Annual Return: -0.3614045

Annualized Sharpe Ratio: -0.7272896  Win %: 0.4920635

Annualized Volatility: 0.4969197   Maximum Drawdown: -0.8058170

Max Length Drawdown: 444.0000000


Very interesting results. The R-ratio portfolio performed substantially worse than the mean-variance portfolio, contrary to my original hypothesis. This may well be due to the very small size of the portfolio we were testing, which leaves it vulnerable to idiosyncratic risk. For more robust results, I will need to run these tests on much larger portfolios to wash out idiosyncratic risk, and on various portfolios of a fixed size (number of securities and market cap) to get a broader understanding of the variation in performance between the two optimization techniques.

It could also be that the distribution of the securities changed, rendering the CVaR values used to calculate the R-ratio wrong. In fact, we would expect the distributions to change, given the magnitude of the events which occurred in 2008 (and how far out in the tails they were considered to be).


Filed under Finance, Portfolio Optimization

Graphing with fPortfolio

Now to making pretty-looking graphs and charts for portfolio optimization! The first thing we will do is determine the frontier for our combination of securities. Remember, the variable returnsMatrix below is a matrix of returns for all the securities in your portfolio.
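A minimal sketch of this step, assuming returnsMatrix is the matrix of security returns from the earlier posts (fPortfolio works on timeSeries objects):

```r
library(fPortfolio)

# portfolioFrontier() expects a timeSeries object
frontier = portfolioFrontier(as.timeSeries(returnsMatrix))
```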

This gives us the frontier. If you type in ?frontierPlot and read through, you will find out all the interesting plots you can make.

We can plot this by:
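Something along these lines should work, assuming `frontier` holds the portfolioFrontier() result described above:

```r
frontierPlot(frontier)  # dark circles mark the efficient frontier
grid()                  # overlay a reference grid
```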


The circles in dark mark the efficient frontier and the grid() function just makes it look nicer. We can now add to this plot by doing the following:
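A sketch of the additions, again assuming the `frontier` object from portfolioFrontier(); the colors are my choice:

```r
minvariancePoints(frontier, pch=19, col="red")  # minimum variance portfolio
cmlPoints(frontier, col="blue")                 # capital market point
tangencyPoints(frontier, pch=4, col="yellow")   # tangency portfolio ('x')
```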



This added the minimum variance point, the capital market point and the tangency point. The tangency point is marked with an ‘x’ in yellow and lies in exactly the same location as the capital market point in blue. We can pile on even more stuff:
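A sketch of the extra layers, under the same assumptions about the `frontier` object:

```r
tangencyLines(frontier, col="blue")       # tangency line
sharpeRatioLines(frontier, col="orange")  # Sharpe ratio along the frontier
singleAssetPoints(frontier, col="black")  # individual asset points
twoAssetsLines(frontier)                  # frontiers of all two-asset pairs
```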



So we now have the tangency line in blue, the Sharpe ratio line in orange, some of the visible asset points in black and the efficient frontiers for all possible combinations of two assets in our portfolio. Looking at this we can kind of see how the assets contribute to the portfolio efficient frontier, and why some assets are highly weighted while others are weighted at 0. Another very interesting chart is obtained by:
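The weights chart is presumably produced with fPortfolio's weightsPlot(), applied to the same `frontier` object:

```r
weightsPlot(frontier)  # weights, risk and return along the frontier
```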



This displays the weights on the different securities, the risk, and the return along the frontier. The black line through the chart indicates the minimum variance portfolio. Let’s create some graphs for the tangency portfolio:
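A sketch, assuming `returnsMatrix` as before; the tangency portfolio object is computed first and then passed to the pie-chart function:

```r
tangency = tangencyPortfolio(as.timeSeries(returnsMatrix))
weightsPie(tangency)  # pie chart of the tangency portfolio weights
```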



This gives a pie-chart of the weights on the securities in the tangency portfolio.
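The weighted-returns chart, assuming the same tangencyPortfolio object (here `tangency`):

```r
weightedReturnsPie(tangency)  # pie chart of weighted returns
```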



This gives a pie-chart of the weighted returns of the tangency portfolio.


Filed under Finance, Portfolio Optimization

Portfolio Optimization with fPortfolio

fPortfolio contains a number of functions that make portfolio optimization easier. I can compare the results I get from the functions in fPortfolio to the results from my function from the previous post. I don't expect them to be exactly the same, but they should be broadly similar.

First, install and load the package:
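The standard install-and-load incantation:

```r
install.packages("fPortfolio")
library(fPortfolio)
```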

Next, you need to build a returns matrix for the securities you are interested in. You can create return vectors for the different tickers (using methods from an earlier post) and then combine them together using cbind(). The function I wrote in the previous post also returns a matrix of security returns, so you can just use that code as well.
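A sketch of building the returns matrix with cbind(); the tickers are taken from the output below, and the '.Adjusted' column names come from using Ad():

```r
library(quantmod)

tickers = c('NVDA','YHOO','GOOG','CAT','BNS','POT','STO','MBT','SNE')
returnsMatrix = NULL
for(t in tickers){
    prices = Ad(getSymbols(t, from='2000-01-01', to='2013-06-01', auto.assign=FALSE))
    returnsMatrix = cbind(returnsMatrix, dailyReturn(prices))
}
colnames(returnsMatrix) = paste0(tickers, ".Adjusted")
```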

This is the function for the tangency (or highest Sharpe ratio) portfolio:
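A sketch of the call, assuming the `returnsMatrix` from above; the constraint is passed as an fPortfolio constraint string:

```r
maxW = "maxW[1:9] = 0.2"  # cap each of the nine securities at 20%
tangencyPortfolio(as.timeSeries(returnsMatrix), spec = portfolioSpec(),
                  constraints = maxW)
```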

Here I set the same constraints as in the function I wrote. maxW[1:9]=0.2 says that for securities 1 through 9 (which is all of them), the maximum weight on each is 20%.

The output from this function call is:

MV Tangency Portfolio
Estimator: covEstimator
Solver: solveRquadprog
Optimize: minRisk
Constraints: maxW

Portfolio Weights:
NVDA.Adjusted  YHOO.Adjusted  GOOG.Adjusted  CAT.Adjusted  BNS.Adjusted
       0.0000         0.0000         0.2000        0.0335        0.2000
 POT.Adjusted   STO.Adjusted   MBT.Adjusted  SNE.Adjusted
       0.2000         0.1760         0.1905        0.0000

Covariance Risk Budgets:
NVDA.Adjusted  YHOO.Adjusted  GOOG.Adjusted  CAT.Adjusted  BNS.Adjusted
       0.0000         0.0000         0.1301        0.0286        0.1409
 POT.Adjusted   STO.Adjusted   MBT.Adjusted  SNE.Adjusted
       0.2407         0.1773         0.2823        0.0000

Target Return and Risks:
  mean     mu    Cov  Sigma   CVaR    VaR
0.0006 0.0006 0.0161 0.0161 0.0398 0.0224

This obviously runs much faster, and gives more detailed and more readable output, than the function I wrote. Oh well. It is interesting to see that the weights given for a couple of the securities are different. Not having read the code written by the authors of this function, I am more inclined to trust the results of the brute-force function I wrote; however, the difference is most likely due to different covariance estimation methods and procedures.

It is commonly known that portfolio weights in a Markowitz mean-variance optimization framework are very sensitive to the estimated means and covariances; even differences in rounding can lead to fairly different weights. Also, technically, we are supposed to be using expected returns as input, not historical returns. Using historical returns assumes that the returns of each period are independent, come from the same distribution, and sample the true distribution of the security. All of these assumptions can easily be shown to be false.

In the next post, I will experiment with some of the graphs and plots we can make using fPortfolio.


Filed under Finance, Portfolio Optimization

Portfolio Optimization

Changing tracks, I want to look at portfolio optimization now. Although this is very different from developing trading strategies, it is useful to know how to construct minimum-variance portfolios and the like, if only for curiosity's sake. Also, just a (hopefully unnecessary) note: portfolio optimization and parameter optimization (which I covered in the last post) are two completely different things.

Minimum-variance portfolio optimization has a lot of problems associated with it, but it makes for a good starting point, as it is the most commonly discussed optimization technique in classroom finance. One of my biggest issues is with measuring risk via volatility: security outperformance contributes as much to volatility (and hence measured risk) as security underperformance, which ideally shouldn't be the case.

First, install the package tseries:
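The usual one-liner:

```r
install.packages("tseries")
library(tseries)
```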

The function of interest is portfolio.optim(). I decided to write my own function that takes a vector of tickers, start and end dates for the dataset, min and max weight constraints, and a short-selling constraint. This function first processes the data and then passes it to portfolio.optim() to determine the minimum-variance portfolio for a given level of return. It then cycles through progressively higher target returns to see how high the Sharpe ratio can go.

Here is the code with comments:

minVarPortfolio= function(tickers, start='2000-01-01', end=Sys.Date(),
                          lowestWeight=0, highestWeight=1, short=FALSE){

# Load up the packages
    library(quantmod)
    library(tseries)

#Initialize all the variables we will be using. returnsMatrix is
#initialized as a vector, with length equal to one of the input
#ticker vectors (dependent on the start and end dates).
#Sharpe is set to 0. The weights vector is set equal in
#length to the number of tickers. The portfolio is set to
#NULL. A 'constraint' variable is created to pass on the
#short parameter to the portfolio.optim function. And vectors
#are created with the low and high weight restrictions, which
#are then passed to the portfolio.optim function as well.
    returnsMatrix=NULL
    sharpe=0
    bestSharpe=0
    weights=rep(0, length(tickers))
    portfolio=NULL
    constraint=short
    lowVec=rep(lowestWeight, length(tickers))
    highVec=rep(highestWeight, length(tickers))

#This is a for-loop which cycles through the tickers, calculates
#their return, and stores the returns in a matrix, adding
#the return vector for each ticker to the matrix
    for(i in 1:length(tickers)){
        prices=Ad(getSymbols(tickers[i], from=start, to=end, auto.assign=FALSE))
        returnsMatrix=cbind(returnsMatrix, dailyReturn(prices))
    }

#This for-loop cycles through returns to test the portfolio.optim function
#for the highest Sharpe ratio.
    for(j in 1:100){

#Stores the log of the (daily) target return in retcalc
        retcalc=log(1+j/100)/252
        print(paste("Ret Calc:", retcalc))

#Tries to see if the specified return from retcalc can result
#in an efficient portfolio
        portfolio=try(portfolio.optim(as.matrix(returnsMatrix), pm=retcalc,
                      shorts=constraint, reslow=lowVec, reshigh=highVec),
                      silent=TRUE)

#If the portfolio exists, it is compared against previous portfolios
#for different returns using the Sharpe ratio. If it has the highest
#Sharpe ratio, it is stored and the old one is discarded.
        if(!inherits(portfolio, "try-error")){
            print('Not Null')
            sharpe=sqrt(252)*mean(portfolio$px)/sd(portfolio$px)
            if(sharpe > bestSharpe){
                bestSharpe=sharpe
                weights=portfolio$pw
            }
        }
    }

    print(paste('Sharpe:', bestSharpe))
    return(rbind(tickers, weights))
}

This code works fine except when the restrictions are too strict, in which case the portfolio.optim function can't find a minimum-variance portfolio. This happens if the optimal portfolio has negative returns, which my code doesn't test for. For this reason, I wanted to try out other ways of finding the highest-Sharpe portfolio; there are numerous tutorials out there on how to do this.
After running my function with the following tickers and constraints:

matrix=minVarPortfolio(c('NVDA','YHOO','GOOG','CAT','BNS','POT','STO','MBT','SNE'), lowestWeight=0, highestWeight=0.2, start='2000-01-01', end='2013-06-01')

This is the output I get:

[1] "Sharpe: 0.177751547083007"

tickers  "NVDA"                    "YHOO"                   "GOOG"
weights  "-1.58276161084957e-19"   "2.02785605793095e-17"   "0.2"
tickers  "CAT"                     "BNS"                    "POT"
weights  "0.104269676769825"       "0.2"                    "0.2"
tickers  "STO"                     "MBT"                    "SNE"
weights  "0.189985091184918"       "0.105745232045257"      "-2.85654465380669e-17"

The ‘e-XX’ weights basically indicate a weighting of zero on that particular security (NVDA, YHOO and SNE above). In the next post I will look at how all this can be done using a package called ‘fPortfolio’. Happy trading!


Filed under Finance, Portfolio Optimization

Parameter Optimization for Strategy 2

Now, let's try some parameter optimization for the SMA strategy! There probably are functions in R that I could use to do this, but I figured it would take me as long to code it myself as to find something usable on the internet, and I enjoy coding much more than looking things up online.

My aim is to find out which SMA is best to use for going long and which is best for going short on the S&P 500. Ideally, I should optimize the short SMA for each long SMA (or vice versa) to find the best combination, but I don't think optimizing them independently (as I did here) makes much of a difference in this case. This is the code I wrote:



   smaInit=20; smaEnd=250   # range of SMA lengths to test
   library(quantmod); library(PerformanceAnalytics)
   getSymbols('^GSPC', from='2007-01-01', to='2013-06-19')
   rets=dailyReturn(Ad(GSPC))
   bestSMA=c(long=0, short=0); bestSharpe=c(long=-Inf, short=-Inf)
   for( i in smaInit:smaEnd){
	sma=SMA(Ad(GSPC), n=i)
	# Long: hold while the price is above the i-day SMA
	stratRets=na.omit(na.omit(lag(ifelse(Ad(GSPC)>sma,1,0),1))*rets)
	sharpe=as.numeric(SharpeRatio.annualized(stratRets, scale=252))
	if(sharpe>bestSharpe['long']){ bestSharpe['long']=sharpe; bestSMA['long']=i }
	# Short: short while the price is below the i-day SMA
	stratRets=na.omit(na.omit(lag(ifelse(Ad(GSPC)<sma,-1,0),1))*rets)
	sharpe=as.numeric(SharpeRatio.annualized(stratRets, scale=252))
	if(sharpe>bestSharpe['short']){ bestSharpe['short']=sharpe; bestSMA['short']=i }
   }
   print(cbind(bestSMA, bestSharpe))


It is pretty straightforward and self-explanatory. It runs a loop over each SMA length from smaInit to smaEnd and stores the one with the highest Sharpe ratio. For more complicated strategies, we will need to do a bit more heavy lifting when it comes to parameter optimization. This code maximizes the Sharpe ratio, but you can easily modify it to maximize returns, minimize volatility, etc. The highest-Sharpe SMA for the long position is the 70-day SMA, and for the short position it is the 84-day SMA.

After running the strategy with the optimized parameters, these are the performance results:


Cumulative Return: 0.31104898   Annual Return: 0.04286661

Annualized Sharpe Ratio: 0.18405777   Win %: 0.53078556

Annualized Volatility: 0.23289757    Maximum Drawdown: -0.28943309

Max Length Drawdown: 1078.00000000

Not a huge difference from what we had before. And the little bit of performance improvement that we achieved is probably more a result of curve-fitting than anything else. If your initial parameter values conform with some market intuition -and thus capture most of the obtainable market return- parameter optimization will not be that helpful in a paper-trading implementation of your strategy, as the improvements will mostly be due to curve-fitting the historical data.


Filed under Finance, Trading Strategies

Strategy 2: Riding the SMA Curve

This is the least complicated trend strategy in existence. You buy and hold the security as long as its price is above an N-day Simple Moving Average (SMA), and you can short it while it is below the SMA curve. The important question with this strategy is what the length in days of the SMA should be. We can run a test of different SMAs to see which one is most profitable / least risky, and then choose accordingly. The intuition behind this choice should be based on an understanding of how long price trends usually last for a given security. This will obviously vary across markets, securities and time periods.

A more technical approach to estimating the optimal SMA length could be derived from one of the many parameter optimization techniques available through R, or by coding it yourself. These run the risk of curve-fitting, but as long as you are aware of the dangers associated with that, this could be one thing to try. I've found through experience that the 200-day SMA works best for the S&P 500 as a whole, so I will be running this backtest using that.

Let's say that if the market closes above its 200-Day Daily High SMA on any given day, we go long the next day, and if it closes below its 200-Day Daily Low SMA, we go short.

1-Get the data:

2-Calculate the 200-Day SMAs:

3-Calculate the lagged trading signal vector:

4-Get rid of the NAs:

5-Calculate returns vector and multiply out the trading vector with the returns vector to get the strategy return:

6-Run performance analytics:
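Since the original snippets were lost, here is a sketch of steps 1 through 6 using quantmod, with the Performance() helper defined below; how days between the two SMA bands are handled is my assumption (flat):

```r
library(quantmod)
library(PerformanceAnalytics)

# 1 - Get the data
getSymbols('SPY', from='2007-01-01', to='2013-06-19')

# 2 - Calculate the 200-day SMAs of the daily highs and lows
smaHigh = SMA(Hi(SPY), n=200)
smaLow  = SMA(Lo(SPY), n=200)

# 3 - Lagged trading signal: long above the high-SMA, short below the low-SMA
signal = lag(ifelse(Cl(SPY) > smaHigh, 1,
             ifelse(Cl(SPY) < smaLow, -1, 0)), 1)

# 4 - Get rid of the NAs
signal = na.omit(signal)

# 5 - Strategy returns: trading vector times the returns vector
stratRets = na.omit(signal * dailyReturn(Ad(SPY)))

# 6 - Run performance analytics
Performance(stratRets)
```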


Note: I got the Performance function from somewhere on the internet. I can't remember where exactly, so unfortunately I can't directly credit the source. Regardless, I'm thankful to whoever wrote it. Here is the code:

#Requires the PerformanceAnalytics package
Performance <- function(x) {

	cumRetx = Return.cumulative(x)
	annRetx = Return.annualized(x, scale=252)
	sharpex = SharpeRatio.annualized(x, scale=252)
	winpctx = length(x[x > 0])/length(x[x != 0])
	annSDx = sd.annualized(x, scale=252)

	DDs <- findDrawdowns(x)
	maxDDx = min(DDs$return)
	maxLx = max(DDs$length)

	Perf = c(cumRetx, annRetx, sharpex, winpctx, annSDx, maxDDx, maxLx)
	names(Perf) = c("Cumulative Return", "Annual Return","Annualized Sharpe Ratio",
		"Win %", "Annualized Volatility", "Maximum Drawdown", "Max Length Drawdown")
	return(Perf)
}


We get the following results (our sample period is 2007-01-01 to 2013-06-19). First, for the market (buy and hold):

Cumulative Return: 0.10875991   Annual Return: 0.01613934

Annualized Sharpe Ratio: 0.06700363   Win %: 0.55157505

Annualized Volatility: 0.24087258   Maximum Drawdown: -0.59577736

Max Length Drawdown: 1411.00000000

And these are the results for the strategy:

Cumulative Return: 0.30075987  Annual Return: 0.04159395

Annualized Sharpe Ratio: 0.17975121 Win %: 0.53215078   

Annualized Volatility: 0.23139732   Maximum Drawdown: -0.35921405  

Max Length Drawdown: 1078.00000000 

So this strategy performed remarkably well compared to the market. 1-0 for trend strategies! Looking at the graphs, it's easy to see the points where the strategy mirrors the market returns (and hence is short) and where it follows the market returns (and hence is long). So does that mean we can start trading with this algorithm and make money? Not really. One important point to keep in mind is that backtesting can only be used to reject strategies, not to accept them. It is possible that this strategy can make money going forward, but who really knows what the market will do? At least we can't outright reject this strategy as useless.

What we are essentially saying is that under the conditions we used, our strategy was profitable. If those conditions were to continue, or repeat themselves, then we would have a profitable strategy on our hands. Are those conditions likely to repeat themselves? If market reactions to different stimuli are consistent across time, and if those stimuli re-occur, then yes, perhaps. There's also something to be said here about interactions between the different reactions or stimuli and their subsequent effects on market outcomes. Is this something we can test for? Maybe, but it's not something I'll be getting into for now.


Filed under Finance, Trading Strategies

Strategy 1 Extended (Part 2)

We can extend our strategy, and hopefully make it more profitable, by incorporating short selling. Our annualized volatility will go up, but it will be interesting to see what happens to the annualized return. This is a very simple modification to make.

1-First we create a short selling vector:

#If the stock closes down for three consecutive days, short it
shortVec=ifelse(Cl(SPY)<Op(SPY),-1,0) *
         ifelse(lag(Cl(SPY),1)<lag(Op(SPY),1),-1,0) *
         ifelse(lag(Cl(SPY),2)<lag(Op(SPY),2),-1,0)

2-As before, we lag it and get rid of the NAs:


3-Now we add the short-signal vector to the lagged and NA-removed long-signal vector we had before:

4-And as before, multiply the trading vector with the S&P return vector to get daily strategy returns, then run performance analytics.
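A sketch of steps 2 through 4, assuming `longVec` is the lagged, NA-free long-signal vector from the earlier posts and Performance() is the helper from the previous post:

```r
# 2 - Lag the short signal and remove the NAs
shortVec = na.omit(lag(shortVec, 1))

# 3 - Combine with the existing long-signal vector
tradeVec = longVec + shortVec

# 4 - Daily strategy returns, then performance analytics
stratRets = na.omit(tradeVec * dailyReturn(SPY))
Performance(stratRets)
```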


So with this modification, the annualized volatility rises to approximately 9.40%, and the annualized return falls to -7.10%. Not too good.

The strategy above, and all its subsequent modifications, were momentum-based strategies. They rely on large, directed, short-term price movements to be profitable, and they don't do well when the daily price movements are small and directionless even though the market itself is following an overall trend. The strategies that do well in a trending market are called (!!!) trend strategies. We will look at one in the next post.

Another very important thing I ignored in computing the returns is adjusting for splits and dividends. This can be done using the adjusted price information provided for most equities, and that is what I will use to calculate returns from now on. By using adjusted prices, however, we can no longer simulate exactly when we enter and exit the market (Open, Close); that is a tradeoff we'll have to make for greater convenience. I will still use opening and closing price information to compute the trading signals.


Filed under Finance, Trading Strategies