Can the Lower Time Frames be Useful? : Discussing Information, Robustness and Reliability

To many of you it may be no secret that I am not a fan of the lower time frames in the Forex market (by this I mean charts whose OHLC data belongs to candles with time spans under one hour). The reasons are several and go back to my first year of experiments in the Forex market. During that time I quickly found that these time frames generate systems that are not reliable and tend not to be back/live testing consistent. The causes range from the fact that these time frames vary significantly between brokers to the exploitation of back-testing artifacts when profit and loss targets are too small. The influence of spread variations, slippage and similar effects is simply too great when you go down to the finer grains of Forex historical data.

However, I recently had to deal with some interesting information coming from at least 3 different sources – all of them Asirikuy members – who have done a lot of diligent research into the use of these time frames. Through their hard work they have observed that you can build systems that seem to comply with all standards of reliability if you use several candles and avoid the use of indicators. These two conditions were very interesting to me since they highlight a very significant fact: the problem with the lower time frames is not simply that the bars are less than one hour long, but that the information they contain is much smaller (by information I mean the number of ticks that build each candle).

Think about the one hour time frame: it is constructed from the data obtained between each full hour in the Forex market, the open/high/low/close (OHLC) information of price movements between two hour marks (say from 2 to 3 pm as an example). Now what happens if we shift the 1H candle by 5 minutes, so that it starts at 2:05 and ends at 3:05? Does this new time frame – shifted by 5 minutes – suddenly become less reliable than our initial one? The fact is that both contain exactly the same amount of information and are therefore prone to the same differences between brokers and the same reliability issues. When you use a lower time frame in a cumulative way – taking several consecutive candles to make decisions – you are trading the same thing you would be if you simply shifted an upper time frame by some amount of time.

There is however a big difference: you are seeing a much finer-grained picture of what would otherwise be the 1H time frame (you are looking at all possible 1H candle constructs). For example, if you trade a volatility breakout using 12 five-minute candles instead of a single 1H candle, you will be able to see more breakouts because the 5-minute candles can align in very different ways. Instead of having your breakout contained within the :00 to :59 brackets of an hour, you can now see breakouts that happen, say, between 2:40 and 3:40. This is indeed extremely interesting because it means that you get “the whole picture” about the structure of many more “possible” 1H time frames instead of just a single one. Since all 1H time frame structures are – in theory – equally valid, there should be no problem in getting better results using this method.
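To make this concrete, here is a minimal Python sketch of the idea using synthetic M5 data (the bars, seed and function names are all made up for illustration, not part of any Asirikuy system): twelve consecutive M5 candles aggregate into one 1H candle, and stepping the window by one M5 bar enumerates every possible 5-minute shift of the hourly chart.

```python
import random

def make_m5_bars(n, seed=1):
    """Generate synthetic M5 OHLC bars as a random walk (a stand-in for broker data)."""
    random.seed(seed)
    bars, price = [], 1.3000
    for _ in range(n):
        o = price
        c = o + random.gauss(0, 0.0005)
        h = max(o, c) + abs(random.gauss(0, 0.0002))
        l = min(o, c) - abs(random.gauss(0, 0.0002))
        bars.append((o, h, l, c))
        price = c
    return bars

def aggregate(bars):
    """Collapse a window of M5 bars into a single OHLC candle."""
    return (bars[0][0], max(b[1] for b in bars),
            min(b[2] for b in bars), bars[-1][3])

def shifted_h1_candles(m5_bars):
    """Every possible 1H candle: one per 5-minute shift, 12 M5 bars each."""
    return [aggregate(m5_bars[i:i + 12]) for i in range(len(m5_bars) - 11)]

m5 = make_m5_bars(48)                                            # four hours of M5 data
shifted = shifted_h1_candles(m5)
aligned = [aggregate(m5[i:i + 12]) for i in range(0, 48, 12)]    # the 4 "official" H1 bars
print(len(shifted), "shifted 1H candles vs", len(aligned), "aligned ones")
```

Four hours of M5 data yields only 4 aligned hourly candles but 37 shifted ones, which is why a breakout scan over the rolling windows sees many more events than the aligned hourly chart does.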

This is when I start to get nervous ;o). You can look at the results of 20-year simulations from systems using granular approaches like those described above (I have seen at least 4, including the RobinVol system made by Fernando — who is also an Asirikuy member) and the results are far better than if they used an upper time frame. Signals are much more plentiful and you can in fact get much better results in historical back-tests. These systems use the same amount of information as an upper time frame system, so in theory they should be just as reliable. The figures can be a bit staggering, with profit to drawdown ratios bordering the 4-7 mark, and this is what worries me. All the systems and setups I have seen with very high average compounded yearly profit to maximum drawdown ratios have one thing in common: they all failed to take into account something fundamental that implied big problems going forward.

I will be honest: in the cases of the above strategies, right now I have no idea what they could be doing that would imply such a gross underestimation of drawdown (or overestimation of profit). All of these Asirikuy members have developed their systems with the highest standards in mind – stress tests, Monte Carlo simulations, back/live testing consistency analysis – and everything points to them being what they appear to be. What is wrong then? As far as I can see, the only thing all of these systems have in common is some form of multiple position management, meaning that they all allow simultaneous positions by acting on every signal that appears and adding positions. Some of these strategies do this to encourage trend following, and another – perhaps the most powerful – does it with limit orders on S&R levels determined algorithmically (the member who discussed this system with me doesn’t want me to go into further detail as the system is being used in a private business environment). My feeling is that there is some underestimation of historical correlations between these trades that will be paid for in the future, but so far I have to admit this is just a feeling; there is no reason why these systems shouldn’t be tested.

So am I telling you to avoid these systems because they are going to crash and burn? No, I have no reason to say they will right now. The truth is that these systems show astounding results, way above the Barclay currency trader index, and for this reason such simulations need to be studied in a very skeptical way. I am sure these people have done so themselves, and many of them have personally come to me to check their procedures and pull apart their systems to find any obvious problems. Right now we know that these systems produce simulations that border insanity (regarding profit to drawdown ratios) and it is therefore important to test them extremely thoroughly to find out whether they are real or flawed.

I can however say that I wouldn’t recommend anyone to trade such setups on larger accounts until extensive live trading confirms that there isn’t a big problem that simply hasn’t been found yet. As in the case of multiple system portfolios, in the beginning we saw extremely large profit to drawdown ratios (sometimes beyond 9) that looked realistically achievable, but we were soon shown that this was not the case when we discovered things such as efficiency bursts and trade chain dependency. The problem might be something obvious that our current testing procedures simply ignore.

The truth is that if such systems aren’t tested we will never know whether they truly are what they seem. I have helped some of these traders in every way I can to discover what might be wrong (from the simulation side) but it now seems that – if there is a problem – it is hidden somewhere that may require actually trading them live to find out. To these traders, whom I appreciate greatly as my dear pupils at Asirikuy, I can wish nothing but the biggest success — their victories are mine as well. In Asirikuy systems (mainly those based on pure price action) there are also significant increases in profitability when shifting to this type of approach – using multiple candles from lower time frames to construct the upper time frame signals from the same amount of information – so we may also help with one or two live accounts. However, Asirikuy systems fail to achieve such high AAR/Max DD figures without some sort of basket (multiple positioning) implementation, further suggesting that – if there is a problem – it might be related to this feature instead. I will keep my eyes open on how these systems trade in order to find out whether the light at the other end of the tunnel is just a freight train coming our way (paraphrasing James Hetfield here).

Understanding always plays a pivotal role; in this case we’ll see whether what really matters is the amount of information, regardless of its structure. As the Asirikuy mantra says: understand, expect, evaluate. If you would like to learn more about trading and how you too can learn to design and evaluate your own trading strategies please consider joining Asirikuy, a website filled with educational videos, trading systems, development and a sound, honest and transparent approach towards automated trading in general. I hope you enjoyed this article! :o)


12 Responses to “Can the Lower Time Frames be Useful? : Discussing Information, Robustness and Reliability”

  1. Fd says:

    Hi Daniel,

    I also think that data coming from time frames below H1 can be traded in principle. In fact the restriction to trade only above H1 works like a noise filter, giving you more reproducible backtests on different feeds. However, we have seen that robustness across feeds can be tuned. The same will apply to lower time frames: if you are using proper methods to measure price action there might be no problem. Classical indicators are very sensitive to noise and are thus likely to show problems.

    Opening multiple positions is OK if risk is controlled properly. The algorithm needs to make sure that a configurable risk limit is not exceeded. But are results really a lot better when comparing these strategies while taking into account the maximum risk induced by pyramiding?

    – if you stack up after the position went in the desired direction and it ends in a win, you would have been better off investing the cumulated amount of lots directly
    – if you stack up after the position went in the desired direction but it closes at a loss, your loss will be bigger than investing the cumulated amount of lots directly
    – if you stack up after the position went against you but it ends in a win, you have an advantage; however, you should have good reasons to believe this will happen, as it is not an obvious thing to do
    – if you stack up after the position went against you and it closes at a loss, you have a slightly lower loss

    It completely depends on the expectancy values for each scenario whether pyramiding can improve results. Thus, analyzing backtest results in the above categories will answer whether pyramiding was able to contribute to the above-average results.
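Fd's four categories can be checked mechanically against a backtest. The sketch below assumes you have already tallied the trade count and average P&L per category from your own results; every number here is purely illustrative, not from any real system:

```python
# Hypothetical tallies per scenario:
# (trade count, avg P&L when pyramiding, avg P&L investing the cumulated lots directly).
scenarios = {
    "favorable move, then win":  (40,  +80.0, +120.0),
    "favorable move, then loss": (25, -110.0,  -70.0),
    "adverse move, then win":    (20,  +90.0,  +60.0),
    "adverse move, then loss":   (15,  -60.0,  -75.0),
}

def expectancy(idx):
    """Count-weighted average P&L per basket; idx 1 = pyramided, idx 2 = lump sum."""
    total = sum(s[0] for s in scenarios.values())
    return sum(s[0] * s[idx] for s in scenarios.values()) / total

pyr, lump = expectancy(1), expectancy(2)
print(f"pyramided expectancy: {pyr:+.2f}   lump-sum expectancy: {lump:+.2f}")
```

With these made-up tallies the lump-sum expectancy comes out higher, which is exactly Fd's point: whether pyramiding helps is an empirical question the per-category breakdown answers.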

    Whether it makes sense in general to trade time frames below H1 can be answered by doing a frequency spectrum analysis of tick data. Because of the Nyquist–Shannon sampling theorem, the sampling frequency of a continuous data stream needs to be more than twice the maximum frequency of the original signal. Sampling at a lower frequency (even H1 could in fact be too low) will lead to aliasing effects, creating spurious signals not contained in the original data. Sampling at a (too) high frequency will give you an excess of meaningless noise which will not contribute to reconstructing the original signal. Most likely the optimal sampling rate will not be a round multiple of any standard time frame but some fraction. It could be completely justified that at current market speed, e.g., 51 minutes would be the ideal spacing allowing capture of all significant features of the price movement.
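As a toy illustration of the aliasing argument (a made-up calculation, not a market measurement), folding a hypothetical sub-hourly price cycle into the band observable at the H1 sampling rate shows how a spurious longer cycle appears:

```python
def alias_frequency(f_signal, f_sample):
    """Fold a signal frequency into the band [0, f_sample / 2] observable after sampling."""
    return abs(f_signal - round(f_signal / f_sample) * f_sample)

# A hypothetical 40-minute price cycle, sampled once per hour (H1 candles):
f_cycle = 1 / 40   # cycles per minute
f_h1 = 1 / 60      # one sample every 60 minutes
f_alias = alias_frequency(f_cycle, f_h1)
print(f"a 40-minute cycle sampled hourly masquerades as a {1 / f_alias:.0f}-minute cycle")
```

The 40-minute cycle lands exactly between harmonics of the hourly rate and shows up as a phantom 120-minute cycle, while M5 sampling (Nyquist limit of one cycle per 10 minutes) would capture it intact.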

    Best regards,

  2. Fernando says:

    Thanks for your nice words about RobinVOL :) I didn’t know you were looking at it. Here you have a one-year live account:

    I already sent you part of the code (the trade management subsystem), but if you are interested in any other part of the code or in testing/running the EA, just ask. The default settings trade on M45, as it uses 3 M15 bars to build the signal.

    My points on pyramiding are:

    1. Not all signals have the same strength. A weak signal is better traded with a single position, but I want to be as loaded as possible on very strong signals.

    2. Any signal with positive mathematical expectancy has to live until it reaches its target (be it SL or TP). Otherwise we are increasing the risk (or decreasing profits, which is the same).

    The way we solved trade chain dependency in Asirikuy unfortunately makes it incompatible with this concept (and blocks porting RobinVOL to F4, which would be nice). When we have an open trade and a new signal comes, we just update the SL and TP of the open trade, so we throw away the pending profitability of the first signal (increasing our risk/decreasing our profit potential).

    We just cut winners short.

    Our argument in Asirikuy was that it was easier to manage risk this way, and I agree. With pyramiding you need to manage risk in a somewhat more complicated way:

    – Limit the size of baskets so that you are protected against black swans (but this limit shouldn’t be reached very often). RobinVOL statistics are much better without this limit, but I kept it in the default settings anyway.

    – And understand risk in terms of statistics. The risk/reward of baskets follows a Gaussian curve, so losing a big basket is not very problematic looking at the big picture.

    The worst case scenario can, in my opinion, be calculated the usual way, which is evaluating monthly returns as classes and then performing the MC simulations (I disagree here with Curtis and that led us to extremely interesting conversations).

    I have always been very cautious and have always avoided talking about my personal EAs in the Asirikuy forum, as it would seem I wanted to sell something there, which is not the case at all. If you are interested in discussing any concept in it, just open a thread and I will gladly participate.

  3. McDuck says:

    Just a silly idea related to the amount of information. If it is measured by the number of ticks, which is possibly questionable, why not nondimensionalize time? The idea would be to consider a bar as a set of ‘N’ ticks, instead of ‘N’ seconds. Faster markets would lead to lower time frames, slower markets to longer time bars.

    Best. Santiago

    • Fd says:

      Hi Santiago,

      There are indeed charting programs / trading platforms which allow for tick-based bars. In fact it is a reduction of a continuous process to a discrete process. This will have some impact on applicable indicators and calculations in general. Both forms are interchangeable to some degree; theoretically you can convert each into the other (compare random walk vs. Brownian motion). It is therefore not obvious to me why tick-based bars should give an advantage when trying to infer future movements.
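For reference, constant-tick-count bars of the kind Santiago proposes are straightforward to build from a tick stream; this sketch uses a made-up price list and an illustrative function name:

```python
def tick_bars(ticks, n):
    """Group a tick price stream into OHLC bars of exactly n ticks each;
    wall-clock time is ignored, so fast markets simply produce more bars per hour."""
    bars = []
    for i in range(0, len(ticks) - n + 1, n):
        chunk = ticks[i:i + n]
        bars.append({"open": chunk[0], "high": max(chunk),
                     "low": min(chunk), "close": chunk[-1]})
    return bars

# A made-up tick stream, grouped into 4-tick bars:
ticks = [1.3000, 1.3002, 1.2999, 1.3005, 1.3003, 1.3001, 1.3004, 1.3000]
bars = tick_bars(ticks, 4)
for bar in bars:
    print(bar)
```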

      Best regards,

  4. Dz says:


    First of all, Fernando, I would say that RobinVol is one of the best-made commercial EAs I have ever seen – excellent job indeed. But there is still something about unrealistic estimations in its backtesting.

    I once wrote a simple SQL script (posted to the Asirikuy forum) for the DD tool which combines all pyramiding orders into one single order, and then ran MC simulations with such combined orders. And the results were definitely different – for RobinVol at Risk=1, instead of 15% drawdown it easily became 50%. This was of course more about pyramiding than about M15, but it practically shows how mistakes in the logic of what we do affect what we expect. And in the live test on myfxbook there are also some special periods like Aug 2 – the curve goes down too quickly in my mind…

    So I would vote that there is much more risk in the pyramiding than in the M15.


  5. Fernando says:

    I don’t understand why you would want to combine all pyramiding orders into one order in this EA.

    Orders are independent of each other. All new trades are generated by their corresponding signal (so all trades have a positive mathematical expectancy). And each one has its own TP and SL that does not depend on other concurrent trades. There is a second strategy (which is not an independent strategy) that just increases the lot size of S1 positions in favorable situations.

    On August 2 there was a 6% drawdown on a very adverse basket, but quite far from its worst historical drawdown and its WC scenario, so I consider it normal. In fact, we are near new equity highs now.

    Anyway, what is the problem with doing a Monte Carlo analysis based on monthly outcomes, as we do with all our EAs (including the ones that do concurrent trades)?
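For readers following along, a Monte Carlo over monthly yields of the kind Fernando refers to can be sketched roughly like this (resampling with replacement here; the yields, run count and quantile are illustrative assumptions, not RobinVOL's actual procedure):

```python
import random

def max_drawdown(returns):
    """Largest peak-to-trough equity loss when compounding a sequence of period returns."""
    equity = peak = 1.0
    worst = 0.0
    for r in returns:
        equity *= 1.0 + r
        peak = max(peak, equity)
        worst = max(worst, 1.0 - equity / peak)
    return worst

def monte_carlo_dd(monthly_returns, n_runs=10_000, q=0.99, seed=7):
    """High quantile of max drawdown over resampled (with replacement) monthly sequences."""
    rng = random.Random(seed)
    dds = sorted(max_drawdown(rng.choices(monthly_returns, k=len(monthly_returns)))
                 for _ in range(n_runs))
    return dds[int(q * (n_runs - 1))]

# Illustrative monthly yields, not from any real system:
monthly = [0.03, -0.02, 0.05, 0.01, -0.04, 0.02, 0.06, -0.01, 0.02, -0.03, 0.04, 0.01]
wc = monte_carlo_dd(monthly)
print(f"99th-percentile Monte Carlo max drawdown: {wc:.1%}")
```

Because the unit being resampled is a whole month, any intra-month clustering of trades (including pyramided baskets) is baked into each yield figure, which is the crux of the disagreement in this thread.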

  6. Dz says:

    I think that:

    1) Opening additional positions increases total market exposure in a special way, unlike ordinary one-open-order strategies. Thus standard Monte Carlo logic can’t be used; the algorithm needs a correction to take such ‘special’ situations into account.

    2) Even though you claim that each position is opened by its own signal, they are de facto not totally statistically random in nature and form some kind of chains or patterns, which themselves form a statistical anomaly far away from a pure random distribution.

    These are the main logical flaws I see, and whether I am right or not, there is something to think about here.

    So, after thinking about this, I decided to test what effect it may have on the statistics. Rewriting the Monte Carlo simulator logic is a huge and time-consuming task, which I of course didn’t want to do, so I took a shortcut – keep the Monte Carlo logic untouched, but let it treat those chains of orders as a single order. This way we can get a more reliable idea of our market exposure and where it could lead. At least it tests the concept and shows whether there is a big difference for chains or not. That’s where the idea of combining the orders came from – as a shortcut to avoid rewriting the Monte Carlo simulator. And after testing, the difference turned out to be HUGE.
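The combining shortcut Dz describes can be sketched as follows: any trade whose open time falls inside the running basket's lifetime is merged into it, so the simulator later shuffles baskets instead of correlated single orders (the trade list and function name are illustrative, not from the actual SQL script):

```python
def combine_into_baskets(trades):
    """Merge time-overlapping trades into single basket trades with summed P&L.
    trades: list of (open_time, close_time, pnl) tuples."""
    baskets = []
    for open_t, close_t, pnl in sorted(trades):
        if baskets and open_t <= baskets[-1][1]:   # overlaps the running basket
            b_open, b_close, b_pnl = baskets[-1]
            baskets[-1] = (b_open, max(b_close, close_t), b_pnl + pnl)
        else:
            baskets.append((open_t, close_t, pnl))
    return baskets

# Illustrative trade list: three pyramided entries on one breakout, then two lone trades.
trades = [(0, 10, -1.0), (2, 10, -1.0), (4, 10, -1.0), (20, 25, 2.0), (30, 33, 0.5)]
baskets = combine_into_baskets(trades)
print(baskets)
```

The three chained entries collapse into a single -3.0 unit, so a Monte Carlo run can place that full loss anywhere in the sequence instead of treating the three correlated losers as independent draws.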

    I think this is because of market exposure which we don’t realize in the pyramiding approach. And that is practically what I mean about Aug 2 – the curve goes down ‘extremely quickly’ with only 2-3 chains. There is something wrong here, and it reminds me of scalper systems or martingales. I believe that for a sound system you need a long way down to reach the Monte Carlo worst case, and if you can reach half of it (6% of 15%) within several hours, you can reach the full Monte Carlo worst case some other time.

    Something like this.

    And again, I am not saying that I am right here – to be 100% right you would need to rewrite the Monte Carlo simulator logic and do it the right way (which is a very clever and complicated task in itself). For me this quick study only proved that there is a huge risk here and it needs to be studied more thoroughly and properly if someone is interested.


    P.S. If you use the combining of orders on a martingale curve before the wipe-out (martingale being an extreme version of pyramiding), and run Monte Carlo on it, it shows exactly what everybody knows it does – wiping the account.

  7. Fernando says:

    Obviously, any system that chains a lot of consecutive losing trades will reach its worst case scenario very quickly. The speed is somewhat proportional to the trading frequency. Nothing can be done if this extreme scenario comes (on any EA), as this is exactly what it is for.

    I don’t agree with the whole reasoning of grouping trades into baskets. You are comparing things with 20 pips of risk with things with 500 pips of risk.

    As long as the system trades more or less the same along all of its history, it is much more accurate to just run a Monte Carlo analysis over weekly or monthly yields. This is generic and valid for any kind of homogeneous data.

    Teyacanani and Ayotl – which pyramids – for example, will tend to trade more during volatile markets (their signals will be closer together), and that doesn’t make their MC analysis invalid as long as your units are monthly yields.

  8. Jeremy says:

    Great topic and conversation. Let me first say that RobinVol is the best breakout trading EA currently out there so far. But…
    Trades chained in this fashion have a correlation due to the fact that they are trading the same breakout event. In the case of RobinVol it becomes more complicated to calculate because they have different scaling exit points, i.e. trades 2, 3 and 4 can lose whilst trade 1 closes in profit, but that doesn’t change the fact that at some points a market exposure exists with a large basket of trades that could all fail on a strong converse movement. So you only need a few bad breakouts in a row (a breakout event that is strong, starts 5 trades, and then price action whipsaws in the opposite direction) for the trader to hit the significant drawdown inferred by Fd.

    This style of trading also runs the danger of accidentally curve-fitting back-test results. This is because a small percentage of baskets (a basket in this case being the trades linked to the same breakout event) can have a disproportionate statistical significance on overall results. To explain: you only need to avoid several “bad” 5-trade baskets in optimization/design to significantly doctor results. This problem is also compounded by the sample size being only 12-odd years.

  9. Dz says:

    One idea for using lower time frames is increasing robustness and decreasing broker dependency on higher time frames by avoiding borderline signals.

    If we have 2 brokers trading the same higher time frame system, say 1H, we can sometimes have different trades triggered because of slight candle (and thus indicator) variations at borderline parameter values. This is common knowledge.

    So imagine a system that checks for the signal twice (or maybe even more often): say, instead of the 12:00-13:00 candle it checks the 11:45-12:45 AND the 12:00-13:00 candles, and only if BOTH show the signal is the trade opened. Such a system would still be designed as 1H, not M15. The first check at :45 is needed to confirm that the main signal is strong enough.

  10. Dz says:

    There is also a good free chapter from a book, in plain language, about clustering and its consequences on risk/statistical calculations:

  11. Scalptastic says:

    DZ / Fernando,

    Regarding Monte Carlo on individual trades vs. baskets: I don’t see a problem with doing either, and I am with Fernando about doing it without grouping the positions.

    After all, both MC and bootstrap methods sample from the positions and scramble them to get the worst case scenarios.

    I believe the bootstrap method will fare better in this case than MC.

    BTW: bootstrap and MC both achieve the same kind of result, except MC is sampling without replacement whereas bootstrap is sampling with replacement. The most pessimistic test will be the bootstrap.

    I played a lot with both methods, and I must say that MC did not do any good because of its sampling without replacement. Some researchers solve that problem by using block sampling. But why complicate matters if you can get a pessimistic test straight away using sampling with replacement?
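The difference between the two resampling schemes, as Scalptastic defines them here, can be demonstrated on a toy P&L series: a permutation can at worst line up each loser once, while sampling with replacement can draw the worst loser repeatedly, which is why the bootstrap tends to be more pessimistic. All numbers are illustrative:

```python
import random

def max_drawdown(pnl):
    """Worst peak-to-trough drop of the cumulative P&L curve."""
    total = peak = worst = 0.0
    for x in pnl:
        total += x
        peak = max(peak, total)
        worst = max(worst, peak - total)
    return worst

def worst_case(pnl, resampler, n_runs=5000, seed=3):
    """Worst max drawdown observed over n_runs resampled trade sequences."""
    rng = random.Random(seed)
    return max(max_drawdown(resampler(rng, pnl)) for _ in range(n_runs))

def permute(rng, pnl):      # "MC" in this thread: sampling without replacement
    return rng.sample(pnl, len(pnl))

def bootstrap(rng, pnl):    # bootstrap: sampling with replacement
    return rng.choices(pnl, k=len(pnl))

pnl = [1.0, -0.5, 2.0, -1.5, 0.8, -0.7, 1.2, -2.0, 0.5, 1.1]
ws, wb = worst_case(pnl, permute), worst_case(pnl, bootstrap)
print(f"permutation worst DD: {ws:.2f}   bootstrap worst DD: {wb:.2f}")
```

With permutation the worst possible drawdown is bounded by the sum of the losing trades (4.7 here), whereas the bootstrap can exceed it by repeating losers.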
