Backtesting Trading Systems in Python: Not a really good choice

Python is extremely versatile, easy to use and convenient. There is little debate about how quickly this language lets you turn a usable idea into code. Want to load a CSV file? Perform an indicator calculation? Plot a set of values? All of these can be done with just a couple of lines of Python, while the same tasks could take entire pages, if not thousands of lines, in languages like C or Fortran. However, Python has several weaknesses that make it a poor choice for back-testing trading strategies, particularly event-based back-testing. In this post I will go through the things that make Python a bad choice for coding back-testing engines and why, despite the much longer coding time, the benefits of using a lower-level language probably far outweigh the drawbacks for certain types of testing. To perform the tests below, please download the code here.

In general it's important to understand that there are two main ways to do back-testing: vector-based and event-based. In a vector-based back-test you calculate vectors that represent trading decisions and then perform vector operations on them to extract performance. Say you want to back-test a moving-average crossover strategy this way: you first calculate a vector with all the moving-average values, then create a boolean vector indicating whether the moving average is above or below price, and finally use these vectors to compute an equity vector according to where you have signals. Vector-based back-testing does everything by calculating vectors, and these vectors are then used to generate your testing results. It is a computationally efficient way to test certain types of systems.
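The crossover example above can be sketched in a few lines of pandas. The price series below is synthetic and the variable names are illustrative, not code from the post:

```python
import numpy as np
import pandas as pd

# Synthetic price series standing in for real data loaded from a CSV.
rng = np.random.default_rng(42)
close = pd.Series(100 + rng.normal(0, 0.1, 1000).cumsum())

ma = close.rolling(20).mean()            # vector of moving-average values
long_signal = (close > ma).astype(int)   # boolean vector: price above the MA
# Shift by one bar so a signal on bar t trades the return of bar t+1
# (avoids look-ahead bias).
returns = close.pct_change()
strategy_returns = long_signal.shift(1) * returns
equity = (1 + strategy_returns.fillna(0)).cumprod()  # equity curve vector
```

Every step is a whole-vector operation, which is exactly why this style is fast in pandas.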

There are, however, many disadvantages to vector-based tests (which I will leave for a future post), and these lead many people to the alternative: event-based tests. In event-based back-testing you loop through the available trading data and pass your algorithm all the information available to it at each point in time. This most closely matches real market execution, because your strategy does exactly the same thing live: it receives data and makes a decision on each time unit. For this reason event-based back-tests can test any strategy that could be traded in the market, and algorithms coded for event-based back-testing can generally be used to trade live without modification, because the mechanics are the same. In an event-based back-test you do an explicit mock run of your strategy through your data, as your strategy would have done in live trading (or at least as close as you can manage).
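A minimal sketch of such an event loop, with a toy decision rule standing in for a real strategy (all names and data here are illustrative):

```python
import numpy as np

# Synthetic closes; in a real test these would come from historical data.
rng = np.random.default_rng(0)
closes = 100 + rng.normal(0, 0.1, 500).cumsum()

position, entry, trades = 0, 0.0, []

def on_bar(history):
    """Toy decision rule: long when the last close is above the 20-bar mean."""
    if len(history) < 20:
        return 0
    return 1 if history[-1] > history[-20:].mean() else 0

for i in range(1, len(closes)):
    target = on_bar(closes[:i])   # the strategy only ever sees past data
    if target != position:
        if position == 1:
            trades.append(closes[i] - entry)  # close the long, record P/L
        if target == 1:
            entry = closes[i]                 # open a new long at this bar
        position = target
```

The same `on_bar` function could in principle be fed live ticks instead of historical bars, which is the point made above.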

[Screenshot: the Python data-loading and looping code discussed below]

If you want to code an event-based back-testing engine in Python you'll face some serious problems due to Python's very nature. You may have chosen Python because coding in it is very easy, but you will soon find out that this comes at a great cost. If you want to perform a simple data-loading plus event-based testing exercise, you will probably use code like the example shown above. This example loads data from a file called TEST_60.csv (30 years of randomly generated 1H data) and then performs a simple loop through the entire pandas dataframe to calculate the average 20-bar range on each bar (something extremely simple). Doing this takes about 12-15 seconds just to load the data into a pandas dataframe, mostly due to the date parsing, and then several minutes to perform the looping exercise. Looping through a pandas dataframe is extremely slow because libraries like pandas are simply not designed for this type of task; they are designed for vector-based operations, which are optimized within C-based functions inside the library.
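The screenshot is not reproduced here, so the following is only a hedged reconstruction of the kind of code described: an in-memory OHLC frame stands in for TEST_60.csv (whose exact columns are not shown in the post), and a nested loop computes the 20-bar average range through per-element access:

```python
import numpy as np
import pandas as pd

# The real test loads TEST_60.csv with date parsing (the slow loading step);
# here we build an equivalent frame in memory so the sketch is runnable.
rng = np.random.default_rng(1)
n = 2000
close = 100 + rng.normal(0, 0.1, n).cumsum()
main_rates = pd.DataFrame({
    "HIGH": close + rng.uniform(0, 0.2, n),
    "LOW": close - rng.uniform(0, 0.2, n),
    "CLOSE": close,
})

# Naive event-style loop: on every bar, average the high-low range of the
# previous 20 bars. Per-element .iloc access is what makes this so slow.
average_range = []
for i in range(20, len(main_rates.index)):
    total = 0.0
    for j in range(i - 20, i):
        total += main_rates["HIGH"].iloc[j] - main_rates["LOW"].iloc[j]
    average_range.append(total / 20.0)
```

Even at this reduced size the loop is visibly slower than any vectorized equivalent; at 30 years of hourly bars the difference becomes minutes versus milliseconds.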

When you use libraries like pandas or numpy, the cost of looping is actually much larger than the cost of looping through a simple Python list. These libraries have rather inefficient functions for accessing single elements within their objects, because that type of operation is not what they were designed for. Pandas dataframes and numpy arrays are not meant to be iterated over; they are meant for vector-based operations (that is the "pythonic" thing to do). You can run some tests and see how greatly your timings change with the function used to access values within the pandas dataframe: if you change from ix to iat or iloc you will notice some important differences in execution times (see here for more on indexing method performance). Using a library like pandas or numpy is great in terms of coding time saved, but if you're doing event-based back-testing you will never have something fast enough.
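A quick way to see this, sketched with the accessors available in modern pandas (`ix` has since been removed, so `iloc` and `iat` are compared against a single vectorized call):

```python
import time
import numpy as np
import pandas as pd

# 100k random values; the point is relative cost, not absolute numbers.
df = pd.DataFrame({"x": np.random.default_rng(2).normal(size=100_000)})

def time_it(fn):
    start = time.perf_counter()
    fn()
    return time.perf_counter() - start

# Element-by-element access through two different pandas accessors.
t_iloc = time_it(lambda: sum(df["x"].iloc[i] for i in range(len(df))))
t_iat = time_it(lambda: sum(df["x"].iat[i] for i in range(len(df))))
# The same reduction done as one vectorized call.
t_vec = time_it(lambda: df["x"].sum())

print(f"iloc: {t_iloc:.3f}s  iat: {t_iat:.3f}s  sum(): {t_vec:.5f}s")
```

`iat` is the cheapest scalar accessor, but even it is orders of magnitude slower per element than letting pandas do the whole reduction in C.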


The cost of this sort of looping in Python renders the language practically useless for any large-scale back-testing project that requires event-based testing. The 1H-bar loop coded above takes several minutes to run, and it isn't even making any demanding calculations: it is not tracking equity or trades, nor doing any signal generation. This is all because looping through pandas objects is tremendously slow. Sure, we could make it faster by dropping pandas or by using ctypes instead, but then you're already moving into low-level-language territory. You are giving up something tremendously friendly to code with (pandas) for something faster (ctypes). If you're willing to increase your coding time to gain speed, then you are better off simply going to a lower-level language. If you're going to spend 10x the time making Python code faster, just spend that time coding it in C, where you'll know it will be as fast as possible.
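For contrast, the same 20-bar average-range calculation expressed as a single vectorized pandas operation runs in milliseconds even on roughly 30 years of hourly bars (synthetic data, as a sketch):

```python
import numpy as np
import pandas as pd

# Roughly 30 years of hourly bars.
rng = np.random.default_rng(3)
n = 262_800
close = 100 + rng.normal(0, 0.1, n).cumsum()
bars = pd.DataFrame({
    "HIGH": close + rng.uniform(0, 0.2, n),
    "LOW": close - rng.uniform(0, 0.2, n),
})

# One rolling-window call replaces the entire nested loop; pandas runs
# this in compiled code, so the whole series is computed in milliseconds.
average_range = (bars["HIGH"] - bars["LOW"]).rolling(20).mean()
```

This is exactly the trade-off described in the post: the vectorized form is fast, but it only works when the calculation can be expressed as whole-vector operations, which a general event-based engine cannot assume.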

Of course I am not arguing that there is no place for Python in back-testing (after all, we coded an open-source time-series analysis library in Python called qqpat). You can perform reasonably fast, simple vector-based tests in this language, and if you're willing to give up the easiest-to-use libraries you can probably code something much faster using ctypes and speed it up even further with something like pypy. However, the best use I have found for Python is actually as a frontend for much faster back-testing libraries coded in C/C++. In our community we use Python to do things like load configurations, generate graphs and load csv files, while a much more efficient C library performs the actual event-based back-testing. Doing this we can run entire 30-year back-tests on 1H bars in a matter of seconds, while doing the same in Python using easy-to-use libraries like pandas would most likely take 100 times as long, if not longer. It is no mystery, then, why there are simply no commercial event-based back-testing programs that use Python; it's simply not a language cut out for this job.
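The frontend pattern can be illustrated with ctypes. The C back-testing library discussed here is not public, so as a stand-in the sketch below calls a function from the standard C math library; the `run_backtest` name in the comment is hypothetical:

```python
import ctypes
import ctypes.util

# Python as the frontend, a compiled C library doing the number crunching.
# find_library may return None on minimal systems; CDLL(None) then resolves
# symbols from the running process itself (Unix only), which also links libm.
libm_path = ctypes.util.find_library("m")
libm = ctypes.CDLL(libm_path or None)

# Declare the C signature so ctypes converts arguments correctly.
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]

# A real engine would instead expose something like
# run_backtest(bars, n_bars, params) and Python would only prepare the
# inputs and plot the outputs.
print(libm.sqrt(2.0))  # -> 1.4142135623730951
```

The heavy lifting happens entirely in compiled code; Python only marshals data across the boundary, which is why this split keeps 30-year tests in the seconds range.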

If you would like to learn more about back-testing and how you too can code and test strategies using our C/C++ programming framework please consider joining Asirikuy.com, a website filled with educational videos, trading systems, development and a sound, honest and transparent approach towards automated trading.


3 Responses to “Backtesting Trading Systems in Python: Not a really good choice”

  1. PatternMatching says:

    I think that a few minor adjustments to your code will result in a significant speed up and may ultimately make Python a bit more acceptable.

    1) print is quite expensive. If you would like to print out the data contained in average_range at particular intervals, you can do so after populating it.

    2) Consider preallocating an empty NumPy array:

    >> n_elements = len(range(2, len(main_rates.index))) * 20
    >> average_range = np.empty(n_elements)

    >> average_range[(i-2)*20 + j] = range_value

    Just by doing this, I get down to a total execution time of 0.033 sec on dual Intel Xeon E5-2620s. I'm in 64-bit IPython using NumPy 1.10 and Pandas 0.17.1, for what it's worth.

    • admin says:

      Hi PatternMatching,

      Thanks for posting! Really nice improvement. Clearly the print call was only there for illustrative purposes (I just wanted users to see what the function was doing), but nice job on reducing the time by pre-allocating the numpy array. Of course there are all sorts of things you can do to make Python code faster, and I am definitely not saying it cannot be done, especially in specific cases like this one. However, I believe there is still a valid point in that, to gain acceptable performance in Python, you need to give up a good part of the "coding friendliness" that makes it such an attractive language to start with. When your code becomes really complex (for example if you want to do machine learning), modifications like the one you posted become harder and harder to get to. In the end, to reach execution times like those of C/C++, you may end up spending as much time as if you were coding in these lower-level languages.

      What do you think? Do you believe this is the case? Do you think there is always a doable optimization that can bring Python code to C/C++-like performance without too much effort? Any Python tips you would like to share? Of course I don't have the last word on Python, so any eye-openers are definitely welcome. Let me know, and thanks a lot for your contribution,

      Best Regards,

      Daniel

      • PatternMatching says:

        I think that, generally speaking, if the operation is not vectorized, getting 'closer to the metal' by using cython (or something like it) will be optimal, and at that point you're living in the C/C++ world.

        That said, I've had great results using Numba (a just-in-time compilation library that plays pretty well with NumPy) to speed up ARMAs and other non-vectorized computations. For more on it, see:

        http://numba.pydata.org/numba-doc/0.24.0/index.html
        http://pandas.pydata.org/pandas-docs/stable/enhancingperf.html#jit

        There are all sorts of customizations and optimizations available to you, but simply decorating an isolated numerical loop, for example, with @jit seems to be quite performant in most cases.
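        A minimal sketch of that decorator pattern (the function body is illustrative, not code from the thread; it falls back to plain Python if Numba is not installed):

```python
import numpy as np

# @jit compiles the decorated loop to machine code on first call.
# If Numba is absent, substitute a no-op decorator so the sketch still runs.
try:
    from numba import jit
except ImportError:
    jit = lambda **kwargs: (lambda f: f)

@jit(nopython=True)
def average_range(high, low, window):
    """Average high-low range over the previous `window` bars, per bar."""
    out = np.empty(len(high) - window)
    for i in range(window, len(high)):
        total = 0.0
        for j in range(i - window, i):
            total += high[j] - low[j]
        out[i - window] = total / window
    return out
```

With Numba present, the nested loop runs at roughly C speed after the first (compilation) call, without rewriting it in vectorized form.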
