TSLA, Covestor, and potential new project



I saw yet another article on Tesla Motors’ (TSLA) expansion, this time about Panasonic’s $30 million investment in Tesla. A few months ago Toyota invested $50 million, and last month the two companies inked a partnership to use Tesla’s all-electric powertrain in the new RAV4 EV. Daimler has also invested heavily in Tesla’s technology. Right now, Tesla is burning money, and it may never become the next big car manufacturer, but it does have its innovative powertrain technology. If Tesla keeps making these strategic alliances, a future where most electric vehicles on the streets use a Tesla powertrain is not that hard to imagine… (disclaimer: I own some TSLA shares).

I’ve switched brokers from Zecco to Interactive Brokers, so my tracking portfolio on Covestor will no longer be updated daily (I have to send Covestor my monthly Interactive Brokers statements). On Covestor I track one of my ETF rotation algorithms.

Although I’m moving on from the Faber project, I still intend to keep using and learning more about R. I’m currently reading the paper “The Extreme Future Stock Returns Following I/B/E/S Earnings Surprises,” which analyzes the phenomenon of post-earnings-announcement drift (PEAD): stocks tend to keep drifting upwards for months after a positive earnings surprise (a very anti-EMH phenomenon). Getting my hands on the data and replicating the research in R would be fun and educational.

R Faber model trade stats

         timing      buy & hold
cagr     0.07370385  0.04863027
volat    0.06720597  0.15948145
sharpe   1.09668602  0.30492742
maxdd    0.11335030  0.56876409

The first column uses the market timing mechanism from Faber’s paper; the second is simple buy-and-hold on the S&P 500. Sharpe is calculated with a risk-free rate of 0%. The timing model trounced buy-and-hold; however, there’s still room for improvement. E.g., the buy-and-hold Sharpe on 10-year Treasuries is not much worse at 0.95, and its CAGR is slightly better, by 0.6 percentage points. Additionally, the timing model’s short-term performance is far from optimal: YTD it has returned -3.2%, while you would’ve earned almost 14% buying and holding 10-year Treasuries, and roughly broken even buying and holding the S&P. Obviously the fixed 20% allocation to each index isn’t optimal; perhaps dynamic allocation based on momentum, volatility, and correlation (to each other) would help.
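For reference, the four stats in the table can be computed from a weekly return series in a few lines of base R. This is a minimal sketch with my own function names; it assumes 52 periods per year and uses CAGR divided by annualized volatility for Sharpe (with rf = 0, that convention reproduces the ratios shown above):

```r
# Minimal sketch of the four stats, from a vector of weekly simple returns.
# Assumes 52 periods per year; Sharpe uses a 0% risk-free rate.
cagr <- function(r, periods = 52) prod(1 + r)^(periods / length(r)) - 1

volat <- function(r, periods = 52) sd(r) * sqrt(periods)

sharpe <- function(r, periods = 52, rf = 0) (cagr(r, periods) - rf) / volat(r, periods)

maxdd <- function(r) {
  equity <- cumprod(1 + r)          # compounded equity curve
  max(1 - equity / cummax(equity))  # worst peak-to-trough decline
}
```

Check against the table: 0.07370385 / 0.06720597 ≈ 1.0967 and 0.04863027 / 0.15948145 ≈ 0.3049, matching the Sharpe rows.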

Basic trade stats done for Faber model, indices in isolation

In general, the 10-week SMA timing system improved risk-adjusted returns (as measured by the Sharpe ratio) and max drawdown relative to the same stats for buying and holding the same index.

There were some discrepancies between the stats I produced with R and the ones Faber reports in his paper, especially in the trade stats for trading US government 10-year bonds (ignore the gspc label).


For the same period (1973 through 2008), Faber reports a buy-and-hold CAGR of 8.69% and a timing CAGR of 8.79%, whereas I obtained 8.3% for buy-and-hold and only 6.5% for timing. Faber reports a max drawdown of 18.79% for buy-and-hold and 11.2% for timing, but through R I obtained about a 16.5% max drawdown for both. These aren’t huge discrepancies, and they seem to be less significant for the other four indices; quantifying instead of eyeballing the differences in stats might help uncover the cause, but at this point this is a proof of concept, not a rigorous trading system (yet), so I think I’ll move on to modeling Faber’s portfolio.

My research “queue”

Here’s a rough list of what I want to look at in the near future:

  • replicating Faber’s model, and improving it
  • measuring idiosyncratic returns, i.e. the degree to which stock returns are driven by internal factors (company news, performance, etc.), isolated from the effect of external factors (macro, sector, and other systemic factors). The type of analysis in the Zero Hedge article “Alpha is dead” by Matthew Rothman, MD and Head of Quantitative Equity Strategies at Barclays, is what I’m talking about.
  • improving and further analyzing the robustness of one of my ETF timing strategies. Currently tracking daily performance at covestor.com/troy-shu

Will post my progress, methods, code, etc. along the way.

ARTICLE: the future of quant finance




I completely agree with the above article. Quant finance is dominated by high-frequency trading; in fact, in most people’s minds HFT is quant/computational finance. Everyone’s using the same price and volume market data, trying to squeeze out profits by trading at the lowest latency possible; HFT is being commoditized. So you look at other places in the value chain where performance isn’t good enough yet. One example is, as the article calls it, using exogenous instead of endogenous market data: the kind of data that Bloomberg and Reuters don’t provide, the kind of data that no one uses… yet. The first thing that comes to mind is http://www.thestocksonar.com/, which semantically analyzes news articles for positive/negative sentiment on stocks using machine learning/AI techniques.
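As a toy illustration of the idea only (TheStockSonar’s actual models are proprietary and far richer than this): a dictionary-based headline scorer in R, with made-up word lists:

```r
# Toy dictionary-based sentiment scorer. The word lists are invented for
# illustration; real systems use much larger lexicons and ML/NLP models.
positive <- c("beat", "upgrade", "growth", "record", "surge")
negative <- c("miss", "downgrade", "lawsuit", "recall", "plunge")

sentiment_score <- function(headline) {
  # lowercase, split on non-letters, count positive minus negative hits
  words <- tolower(unlist(strsplit(headline, "[^a-zA-Z]+")))
  sum(words %in% positive) - sum(words %in% negative)
}

sentiment_score("Record growth, analysts upgrade TSLA")  # > 0, i.e. bullish
```

Even this crude approach hints at how non-price data could become a tradeable signal before it’s commoditized.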

Despite all the uses of algorithms and computers today, the human brain is still our most valuable asset. We, not computers, decide how to differentiate ourselves from our competitors—what new strategies to research and trade, what new kinds of data to use. The “human aspect” is still paramount in quant finance.

Faber’s market timing 2

The R source for the graph and the calculation of equity for the 10-week trading system is seen below, using S&P 500 index data since 1973. Playing around with an example is a great way to learn.
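For readers without the data handy, here is a self-contained sketch of the 10-week SMA equity calculation on simulated weekly prices (a stand-in for the real S&P 500 series pulled via quantmod; the trading rule is Faber’s: long when price closes above its 10-week SMA, otherwise in cash, with the signal lagged one week to avoid look-ahead):

```r
# Simulate ~6 years of weekly prices as a stand-in for the S&P 500 series.
set.seed(42)
price <- cumprod(c(100, 1 + rnorm(299, 0.001, 0.02)))  # 300 weekly closes

# Trailing 10-week simple moving average (first 9 values are NA).
sma10 <- stats::filter(price, rep(1 / 10, 10), sides = 1)

ret    <- c(0, diff(price) / head(price, -1))         # weekly simple returns
signal <- c(0, head(as.numeric(price > sma10), -1))   # lagged long/flat signal
signal[is.na(signal)] <- 0                            # flat until SMA exists

equity <- cumprod(1 + signal * ret)  # equity curve of the timing system
```

With real data, `price` would come from `getSymbols` and the weekly aggregation in quantmod, and `equity` is what gets plotted against the index itself.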

Finishing up the Faber system for the five indices he uses (S&P 500, MSCI EAFE, GSCI, NAREIT, and 10-year bonds), trading each in isolation. I’m waiting for S&P and GS to respond to my query about where to find the total return series for the GS Commodity Index. The next step is to combine them, position sized at 20% each, into one portfolio.
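The combination step itself is simple once each index’s timing-model return series exists. A sketch, assuming a matrix `returns` with one column per index, rebalanced to the fixed weights every period (Faber rebalances monthly, so treat this as an approximation; the function name is my own):

```r
# Combine per-index timing-model return series into one portfolio at
# fixed weights (default 20% each), rebalanced every period.
portfolio_returns <- function(returns, weights = rep(0.2, ncol(returns))) {
  stopifnot(abs(sum(weights) - 1) < 1e-8)  # weights must sum to 1
  as.vector(returns %*% weights)           # weighted sum per period
}
```

Feeding the resulting vector into the same CAGR/volatility/Sharpe/drawdown stats then gives the portfolio-level comparison against buy-and-hold.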

Faber’s Market Timing paper 1


Using the quantmod and TTR libraries in R: a graph of the S&P 500 index from 1973 to present, with the equity curve of the simple 10-week SMA timing system on the S&P 500 used in Mebane Faber’s paper. Faber’s method of “tactical asset allocation” has produced great risk-adjusted returns over the past 40 years (backtested), which raises some questions: is it robust? How can it be improved? I have several ideas for this… but first I want to see if I can replicate Faber’s original system in R.