With my recent research in reinforcement learning I have been asking myself questions about system development, especially about how an ideal trading system should work. After a lot of analysis I have come to the conclusion that orders such as stop-losses and take-profits would never be needed by an ideal automated trading algorithm, and that the main reason we use them is that regular trading algorithms are by nature sub-optimal. Today I want to talk about how an ideal trading algorithm would work and why additional order-closing mechanics are simply not needed once you develop such an algorithm.
To understand how an ideal algorithm would work, it is first important to ask what decisions can actually be made in the market and what this means for the development of a trading system. In almost all markets you can do only three things at each point in time: go long, go short, or stay on the sidelines. Even a perfect algorithm might decide to stay on the sidelines sometimes, as there may be cases where it sees no point in entering the market because it could not extract profit beyond trading costs. However, these are the only three decisions you can make when trading, and any other decision can be translated into this set of possibilities.
An ideal trading system would look at the market as often as possible and simply decide whether, according to current market conditions, it is best to be short, long, or on the sidelines. The introduction of a stop-loss or take-profit would not improve such an algorithm, as it would already be making the ideal decision at each given point in time. The algorithm acts like a human trader in the sense that it constantly evaluates the market and decides which course of action is best. Since this evaluation is constant, there is no need for stop-loss or take-profit mechanics; these only become necessary when an algorithm is very limited in its scope and must therefore establish boundaries around the very limited set of market conditions it can evaluate.
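The constant-evaluation idea can be sketched in a few lines of code. This is a minimal illustration only, not an actual trading engine: `policy` and `get_state` are hypothetical placeholders for whatever model and market representation you use. The point is that exits happen only when the policy itself switches to OUT or reverses, so no stop-loss or take-profit orders ever appear.

```python
from enum import Enum

class Action(Enum):
    """The only three decisions available at each point in time."""
    LONG = 1
    OUT = 0
    SHORT = -1

def trading_loop(bars, policy, get_state):
    """Re-evaluate the market on every bar; the position always
    matches the policy's latest decision, with no SL/TP orders."""
    position = Action.OUT
    for bar in bars:
        decision = policy(get_state(bar))  # LONG, SHORT or OUT
        if decision != position:
            position = decision            # open, close or reverse
    return position
```

For example, a toy policy that goes long whenever its state reading exceeds some threshold would simply flip the position the moment conditions change, which is the "constant evaluation" behaviour described above.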
Regular trading strategies that use traditional trading signals suffer from a problem of “market blindness”. They do not have the tools to constantly evaluate the market, so they need stop and limit orders to control the profit or loss obtained from each signal. This happens simply because there is no constant, logic-based evaluation of the market, only a simple BUY/SELL rule derived from some signal. There is no information beyond this signal, so the system must rely on sub-optimal mechanisms to capture market gains. It is like being in a dark room with limited information about your surroundings and a few simple rules to navigate: this might be enough to avoid bumping into every wall, but it is certainly not the same as having a flashlight.
What I really like about reinforcement learning is that it approaches trading in a manner that emulates what an ideal algorithm would do. Instead of relying on simple signals plus SL/TP mechanics, you have a constant picture of the market – you know the state of the market at each point in time – and you use this information to make trading decisions without ever needing an SL or TP. Since reinforcement learning aims to learn “what to do” under a wide variety of circumstances, you create algorithms that always know what to do based on their past experience. You always get a BUY/SELL/OUT signal from the market, which eliminates the need to implement these types of exits.
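To make the idea concrete, here is a hedged sketch of tabular Q-learning on a toy discretised market. Everything here is an assumption for illustration: the state labels, the synthetic return series, and the idea of rewarding an action with the profit/loss it would earn over the next bar. This is not the approach used in any particular trading framework, just a minimal instance of learning “what to do” per state over the three possible actions.

```python
import random

ACTIONS = [1, 0, -1]  # 1 = long, 0 = out (sidelines), -1 = short

def greedy(q, s):
    """Pick the action with the highest learned value in state s."""
    return max(ACTIONS, key=lambda a: q.get((s, a), 0.0))

def train(states, returns, alpha=0.1, gamma=0.9, epsilon=0.1,
          episodes=50, seed=0):
    """Tabular Q-learning where the reward for taking action `a`
    is the P/L of holding that position over the next bar."""
    rng = random.Random(seed)
    q = {}  # (state, action) -> estimated value
    for _ in range(episodes):
        for t in range(len(returns) - 1):
            s = states[t]
            # epsilon-greedy: mostly exploit, occasionally explore
            a = rng.choice(ACTIONS) if rng.random() < epsilon else greedy(q, s)
            reward = a * returns[t + 1]
            best_next = max(q.get((states[t + 1], x), 0.0) for x in ACTIONS)
            old = q.get((s, a), 0.0)
            q[(s, a)] = old + alpha * (reward + gamma * best_next - old)
    return q
```

On a synthetic series where one state always precedes positive returns and another always precedes negative ones, the learned table will map the first state to a long decision and the second to a short decision – a direct BUY/SELL/OUT signal per state, with no exit orders anywhere in the logic.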
The above is one of the reasons why I am so excited about the development of trading strategies that tackle the market as a finite state machine. Learning to trade the markets as if they were a computer game makes intuitive sense and should lead to algorithms that are more adaptive and less likely to fail under changing market conditions. If you would like to learn more about reinforcement learning and how we are implementing systems for trading and learning using this approach, please consider joining Asirikuy.com, a website filled with educational videos, trading systems, development and a sound, honest and transparent approach towards automated trading strategies.