Is our random forest model for OOS predictions improving with time? A Bayesian look.

In September 2016 I published a post about the results of our random forest model for out-of-sample predictions in our price action based system repository at Asirikuy. The aim of this model is to use the in-sample results of trading strategies – meaning their back-testing results – to predict the first six months of real out-of-sample performance (that is, the results of the first six months of live trading). In that post I talked about prediction thresholds and how a 0.5 value was optimal at the time, giving an improvement of around +110% on a testing set with a maximum posterior probability near 40%. Now that more than eight months have passed – and with a ton more data now available to our model – we'll take a second look at the results of our random forest algorithm and whether or not it's improving as a function of time.

To address this I repeated the Bayesian analysis I carried out last time to see how the relevant variables change as a function of the model's classification threshold, whether the same threshold remains optimal and whether there is any change in how the posterior probability and performance values behave. As before, the testing set was obtained by taking the last 20% of the data set, since the data is time sensitive. This is necessary because random splits give artificially better results: the training set ends up containing data that could never have been available at training time, given that the data is not generated all at once but across market conditions that evolve as a function of time. The dataset is also much larger, with a total of 5187 points compared to fewer than 2000 points when this analysis was last done.
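For readers who want to reproduce this kind of split, the sketch below shows one way to take the last 20% of a chronologically ordered dataset as the testing set. The DataFrame and column names are illustrative assumptions, not the exact code we use at Asirikuy.

```python
# A minimal sketch of a time-aware train/test split, assuming the mined-system
# results live in a pandas DataFrame with a column holding each system's
# generation date. The column name "generated_at" is a hypothetical placeholder.
import pandas as pd

def chronological_split(df: pd.DataFrame, test_fraction: float = 0.2):
    """Use the last `test_fraction` of rows (by time) as the testing set so the
    training set never contains data generated after the testing data."""
    df = df.sort_values("generated_at")                  # oldest rows first
    split_index = int(len(df) * (1 - test_fraction))
    return df.iloc[:split_index], df.iloc[split_index:]

# Example usage: train_df, test_df = chronological_split(dataset)
```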

The first interesting thing to note is that the best prediction threshold remains essentially the same: the value giving the best improvement in performance and accuracy is 0.5, with a relative improvement in trading performance of +180.55% and an accuracy of 63%. In line with our last analysis the posterior probability increases up to around 0.5, but rather than decreasing sharply after that it remains fairly constant around this value, declining only to 49.92% this time compared to the decline to around 34% we saw last time. The maximum posterior probability is also much higher, at around 52.1%. This value represents the probability that a system gives a profitable out-of-sample result given that the system is classified positively by the machine learning algorithm.
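As an illustration of how these threshold-dependent quantities can be computed, the sketch below scans a range of classification thresholds over a random forest's predicted probabilities and reports the accuracy and the posterior probability (the fraction of positively classified systems that were actually profitable out of sample). It assumes a scikit-learn style workflow and the chronological split above; it is a sketch, not the exact pipeline behind the numbers quoted here.

```python
# Hedged sketch of a threshold scan over random forest probabilities, assuming
# feature matrices and 0/1 labels (1 = profitable first six OOS months) taken
# from the chronological split shown earlier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def scan_thresholds(X_train, y_train, X_test, y_test,
                    thresholds=np.arange(0.30, 0.71, 0.05)):
    """Report accuracy and posterior probability P(profitable OOS | positive)
    for each classification threshold."""
    y_test = np.asarray(y_test)
    rf = RandomForestClassifier(n_estimators=500, random_state=0)
    rf.fit(X_train, y_train)
    proba = rf.predict_proba(X_test)[:, 1]          # predicted P(profitable OOS)
    for t in thresholds:
        positive = proba >= t                       # systems the model would select
        accuracy = (positive == y_test).mean()
        posterior = y_test[positive].mean() if positive.any() else float("nan")
        print(f"threshold={t:.2f}  accuracy={accuracy:.2%}  posterior={posterior:.2%}")
```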

In line with normal classifier behavior the specificity increases as a function of the classification threshold while the sensitivity decreases. However, at the optimum threshold of 0.5 we now have a sensitivity of 21.2% and a specificity of 88.36%, whereas last time these values were closer to a 10% sensitivity with a specificity closer to 95% at the optimum. A higher sensitivity means that an OOS-profitable system now has a higher chance of testing positive in the random forest test, implying that our performance now comes from a larger portfolio of trading systems than before (because we simply trade more systems). Since the posterior probability is also around 12 percentage points higher (52% versus around 40% last time), we are not only likely to select more systems but those systems are more likely to be positive OOS performers.
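For context, the link between sensitivity, specificity and the posterior probability is simply Bayes' theorem applied to the classifier's rates. The snippet below is a worked example; the base rate of OOS-profitable systems used in it is an assumption chosen purely for illustration, not a figure taken from our data.

```python
# Worked Bayes' theorem example: posterior probability of a profitable OOS
# result given a positive classification, computed from sensitivity,
# specificity and an ASSUMED base rate of OOS-profitable systems.
def posterior_given_positive(sensitivity: float, specificity: float,
                             base_rate: float) -> float:
    """P(profitable OOS | classified positive) via Bayes' theorem."""
    true_positive_mass = sensitivity * base_rate
    false_positive_mass = (1.0 - specificity) * (1.0 - base_rate)
    return true_positive_mass / (true_positive_mass + false_positive_mass)

# With the quoted 21.2% sensitivity and 88.36% specificity, an illustrative
# base rate of roughly 37% profitable systems would put the posterior near
# the ~52% figure discussed above:
print(posterior_given_positive(0.212, 0.8836, 0.37))   # ≈ 0.52
```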

Overall the additional data we have gathered indicates that our classifier has improved substantially from a Bayesian perspective. Not only do we have a higher sensitivity and a higher posterior probability, but our improvement in performance has also increased dramatically compared with our last analysis. This is very encouraging as it means that additional data is providing our model with information that is useful from a forward-looking perspective.

Although we now have more than 5K points, these still do not represent the entirety of the systems we have mined – many systems do not yet have the necessary six months of out-of-sample performance to be included – so we should be able to double the size of this set during the next six months as systems we have already mined accumulate out-of-sample data. Once we achieve this I will repeat the analysis so that we can see whether the model continues to improve or reaches a point of diminishing returns. If you would like to learn more about our trading and how you too can trade using a repository of thousands of price action based strategies please consider joining Asirikuy.com, a website filled with educational videos, trading systems, development and a sound, honest and transparent approach towards automated trading.
