In my last blog post we discussed how we can detect system failure using statistical thresholds that tell us with high confidence that a strategy has stopped performing in the manner expected from its historical results. However, such blunt stopping techniques should not be our only weapon for strategy management, since a strategy may become more or less valuable to us well before it reaches a point where we should discard it based on its statistical deterioration. Within the following paragraphs I will go through my views on the use of progressive discarding techniques, what they can achieve for you, what they won't achieve for you and how they can serve as a first line of defense for the management of even large trading strategy arrays.
In order to understand progressive discarding, we must first understand that there are several ways, beyond single-system statistical failure, in which a system can fail. A system can fail by itself – it can reach a point where it no longer matches historical expectations – but it can also fail within a group. This means that a system can fail because its interaction with other systems becomes destructive or redundant. Imagine that you're trading a portfolio of 10 trading strategies and one of them becomes very highly correlated with one of your other systems. The correlation becomes so high that the system is now effectively "redundant"; it is evident that you shouldn't be trading it at the same risk if you're also trading the other strategy. A system could also become destructive, in that it starts increasing the variance of your portfolio beyond what you consider acceptable limits.
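As a rough illustration of the redundancy idea, the sketch below computes the pairwise correlation matrix of strategy returns and flags pairs above a cutoff. The file name, column layout and the 0.9 threshold are all assumptions for the example, not values taken from my own portfolio work.

```python
import pandas as pd

# Hypothetical CSV with one column of daily returns per strategy
# (file name and layout are assumptions for this sketch).
returns = pd.read_csv("strategy_returns.csv", index_col=0, parse_dates=True)

# Pairwise correlation matrix of the strategy return series.
corr = returns.corr()

# Flag strategy pairs whose correlation exceeds an arbitrary cutoff,
# a possible sign that one of the two has become "redundant".
THRESHOLD = 0.9
for i, a in enumerate(corr.columns):
    for b in corr.columns[i + 1:]:
        if corr.loc[a, b] > THRESHOLD:
            print(f"{a} and {b} look redundant (corr = {corr.loc[a, b]:.2f})")
```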
However, it is clear that the above problems are not well suited to the idea of a "line in the sand worst case" (blunt discarding), because a strategy can fail within a group without failing statistically on its own. This means that we may have reason to change the role of the system (trade it at higher or lower risk) but not simply remove it. There are several tools we can use to assess how the role of a system should change over time within a group, but the best method – also one of the easiest to apply in my view – is the use of Markowitz optimizations to find out what the historically optimal weights for our different systems are. The Markowitz procedure takes care of all of the above issues, since the process itself rebalances weights to give systems more or less importance as they become better or worse "team players" (contribute more or less to variance reduction). See this blog post to learn how to carry out a Markowitz optimization analysis using R.
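To make the weighting idea concrete, here is a minimal mean-variance (Markowitz-style) sketch in Python that maximizes a return-to-volatility ratio under long-only, fully-invested constraints. The choice of scipy's SLSQP solver, the no-shorting bounds and the objective are my assumptions for this illustration; they are not the exact procedure used in our tools.

```python
import numpy as np
from scipy.optimize import minimize

def markowitz_weights(returns: np.ndarray) -> np.ndarray:
    """Long-only, fully-invested weights maximizing mean/volatility
    for a (n_periods, n_systems) matrix of strategy returns."""
    mu = returns.mean(axis=0)            # mean return per system
    cov = np.cov(returns, rowvar=False)  # covariance matrix of system returns
    n = returns.shape[1]

    def neg_ratio(w):
        port_mean = w @ mu
        port_vol = np.sqrt(w @ cov @ w)
        return -port_mean / port_vol     # negative because we minimize

    constraints = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)  # weights sum to 1
    bounds = [(0.0, 1.0)] * n                                        # no shorting
    result = minimize(neg_ratio, np.full(n, 1.0 / n),
                      method="SLSQP", bounds=bounds, constraints=constraints)
    return result.x

# Example with random data standing in for three strategies' daily returns.
rng = np.random.default_rng(0)
fake_returns = rng.normal(0.0005, 0.01, size=(1000, 3))
print(markowitz_weights(fake_returns))
```

Systems that contribute more to portfolio variance without a compensating return end up with smaller weights, which is exactly the "worse team player" effect described above.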
The Markowitz procedure will also naturally reduce a system's weight as it approaches a performance-derived worst case, but it will take much longer than a blunt statistical measure to remove the system. We can remove systems with excellent confidence once they show a mismatch with their historical distribution of returns under sharp drawdown scenarios with certain characteristics, while the Markowitz analysis will see this as just a "spot on a canvas" and will only reduce the contribution of that system by some amount. I would not advocate using Markowitz analysis as an all-inclusive solution, because there are scenarios where we should clearly discard strategies bluntly, and a pure Markowitz approach is bound to get you into additional losses that could have been avoided. If a strategy fails its individual worst case it should always be removed.
In my case I prefer a mixed approach between Markowitz rebalancing (progressive management) and single-system statistical worst cases (blunt measures). The idea is to monitor systems for their statistical worst cases – the point where they fail compared to our back-testing expectations – and remove them whenever they reach those levels, while also periodically performing a Markowitz rebalancing of the portfolio to keep the weights balanced as some systems deteriorate and others perform better. The Markowitz rebalancing should be performed across a long span of history, so that it covers a wide range of possible market conditions, and it should not be performed very frequently (for most purposes once or twice a year is more than enough). Rebalancing more often can cause you to shift weights between your strategies in response to noise, while rebalancing too rarely might cause unnecessary losses from systems that have started to play "worse" as a team.
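A rough sketch of how the blunt check and the progressive rebalancing could be combined is shown below. The specific worst-case rule used here (removing a system whose live drawdown exceeds its back-tested maximum drawdown) is only one possible statistical trigger and is my assumption for the example; after the blunt removal step, Markowitz weights would be recomputed over the survivors.

```python
import numpy as np

def max_drawdown(returns: np.ndarray) -> float:
    """Maximum peak-to-trough drawdown of the cumulative equity curve."""
    equity = np.cumprod(1.0 + returns)
    peaks = np.maximum.accumulate(equity)
    return float(np.max((peaks - equity) / peaks))

def surviving_systems(backtest: dict, live: dict) -> list:
    """Keep only systems whose live drawdown has not breached their
    back-tested worst case (the blunt, single-system check)."""
    keep = []
    for name, bt_returns in backtest.items():
        if max_drawdown(live[name]) <= max_drawdown(bt_returns):
            keep.append(name)
    return keep

# Toy example: 'sysB' is forced into a live drawdown far worse than its back-test.
rng = np.random.default_rng(1)
backtest = {"sysA": rng.normal(0.0005, 0.01, 2000),
            "sysB": rng.normal(0.0005, 0.01, 2000)}
live = {"sysA": rng.normal(0.0005, 0.01, 250),
        "sysB": np.full(250, -0.01)}   # steady losses breach the worst case
print(surviving_systems(backtest, live))
```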
A progressive rebalancing approach that reduces risk and controls a "worst case" in group behavior will allow you to manage the deterioration of strategies even when their single-system statistical worst cases have not been reached. However, you need to accept that you will eventually face some regret (in the game theory sense), because sometimes you will reduce weights for strategies that were merely going through temporary "bad periods" and you will miss being "fully loaded" during their recovery phase. This means that the best-case profit-to-drawdown ratio of your portfolio will deteriorate as a consequence of the rebalancing effort (because you open up the possibility of "guessing wrong"), while you gain some insurance in case a strategy fails. Since we know that all systems eventually fail, paying this small insurance premium is worth it. The worst-case cost of this insurance also falls as the number of systems grows (because with many systems the probability of being heavily and wrongly invested in a single one decreases), so in general using as many systems as your capital and technical constraints allow is a good idea.
If you would like to learn more about system worst cases, as well as how you can easily carry out Markowitz rebalancing using our python/R analysis tools, please consider joining Asirikuy.com, a website filled with educational videos, trading systems, development and a sound, honest and transparent approach towards automated trading in general. I hope you enjoyed this article! :o)
Hi Daniel,
are you saying that, provided Markowitz weight management is used, it's better to build portfolios made up of a large number of systems?
In other words, provided they are well balanced by Markowitz optimization, is a portfolio composed of 20 strategies (possibly with little correlation between them) also OK to trade?
Best,
Rodolfo
Hi Rodolfo,
Thanks for posting :o) Provided you can manage this number of systems within your technical/capital constraints, yes. If you perform Markowitz re-balancing on a periodic basis (every year for example) it's better to use a larger number of systems. This can be demonstrated from a game theory perspective, because your worst case regret (putting a lot of weight on a single system that then under-performs heavily) is reduced as the number of systems increases, provided the correlation of returns also remains low.
Note that this makes a lot of assumptions as well, such as using systems that have no serial auto-correlations, trade-chain independent systems, etc. As you know we learned from experience – remember our Coatl trading accounts – that you can have heavy losses from the simultaneous use of a large number of systems that have some of these problems and aren't properly balanced/rebalanced. Markowitz is definitely a key part of the puzzle. I hope this answers your question :o) Thanks a lot for posting,
Best Regards,
Daniel
Hi Daniel,
thank you for your reply.
OK, provided that the minimum requirements you mention are satisfied, it's good to know that one could use large arrays of strategies.
Capital constraints shouldn't be an issue, at least when trading with Oanda, due to their lot sizing policy and applying proper money management.
Best,
Rodolfo