Is your Forex data full of holes? A script for the evaluation of Forex data quality using Python

The first battle that algorithmic retail Forex traders have to face is the acquisition of high-quality historical Forex market data. This is not easy, since the Forex market has no centralized exchange: historical data can come from a wide variety of providers, some of which are terribly unreliable. Currently there are no easy-to-use tools that I am aware of that help the retail trader streamline the process of assessing data quality, making it quite difficult for the novice trader to know whether the data they are using is reliable. In today's post I am going to share with you a basic Python script that new traders can use to evaluate the quality of data in the MT4 history center format. Through this post we'll learn what the script does, how it does it, and how you can use it to know whether your Forex data looks like a piece of Swiss cheese.

Before using the script above please make sure you have the matplotlib, pandas and numpy Python libraries installed; you can install them using the "pip install libraryname" command if you are using one of the recent versions of the Python interpreter. This script uses Python 2.7.x, so don't try to run it with the 3.x versions. After you get everything installed you'll need to get your historical data into a format that the data quality processing script can read. The script reads data in MT4 history center format, so if you're using this platform you can simply export your data from the history center, which will generate a csv file the script can use. If you have data in another format you can change line 82 (which contains the call to the pandas read_csv parser) to specify the correct data format and date parser requirements.
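To make the expected input concrete, here is a minimal sketch of how an MT4 history center export can be loaded into pandas. The column order and the "%Y.%m.%d %H:%M" date format are assumptions based on the usual MT4 export layout (date, time, open, high, low, close, volume), not the original script's line 82 verbatim:

```python
import pandas as pd

def load_mt4_csv(filename):
    # MT4 history center exports have no header row; column names here
    # are assumptions matching the typical export layout.
    df = pd.read_csv(
        filename,
        header=None,
        names=["date", "time", "open", "high", "low", "close", "volume"],
    )
    # Combine the separate date and time columns into one DatetimeIndex.
    df.index = pd.to_datetime(df["date"] + " " + df["time"],
                              format="%Y.%m.%d %H:%M")
    return df.drop(columns=["date", "time"])
```

If your provider uses a different column order or date format, this loader is the single place you would adjust.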

Once you have your historical data in the correct format, all Python libraries installed and the script saved to your computer, you can run it by calling the command "python -f dataFilename -tf timeframe". The script requires two arguments: the "-f" argument, which specifies the data filename you want to check, and the "-tf" argument, which specifies the timeframe of your data. This timeframe needs to be specified according to the pandas resampler conventions (which I also summarize above). This means that if you are using 15-minute data (traditionally written 15M), the actual timeframe you need to input would be "-tf 15T", because in pandas the convention for minutes is "T" and not "M" (M is used for monthly data). If you have 1H data you can use "-tf 1H".
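The command-line interface described above can be sketched with argparse; the "-f" and "-tf" flags come from the post, while the function and help strings are my own illustration rather than the original script's code:

```python
import argparse

def parse_args(argv=None):
    # Reproduces the two required flags described in the post.
    parser = argparse.ArgumentParser(
        description="Check an MT4 history-center CSV for missing bars.")
    parser.add_argument("-f", dest="filename", required=True,
                        help="data file to check")
    parser.add_argument("-tf", dest="timeframe", required=True,
                        help="pandas offset alias, e.g. 15T for "
                             "15-minute data or 1H for hourly data")
    return parser.parse_args(argv)
```

For example, parse_args(["-f", "EURUSD.csv", "-tf", "15T"]) yields the filename and timeframe the rest of the script consumes.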

What the script does is simply load your data into a pandas dataframe, perform a resampling using the same timeframe as the data (which fills it with all the timestamps it should have if it were complete, with null values where you have missing data), filter out weekends and holidays (Dec 24, 25, 26 and Jan 01, 02), and then check how many bars are missing and plot the distribution of the missing data in terms of months and years. After the script executes you will get a small summary telling you the total number of expected data points, the number of missing bars and the percentage these bars represent of the overall data.
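The core of that logic can be sketched as follows. This is a simplified reconstruction, not the original script: the resample call re-indexes the data onto the complete timestamp grid (missing rows become NaN), and a plain "Saturday/Sunday" weekend filter stands in for the more precise Forex session boundaries:

```python
import numpy as np
import pandas as pd

# Holidays filtered by the script, as (month, day) pairs.
HOLIDAYS = {(12, 24), (12, 25), (12, 26), (1, 1), (1, 2)}

def missing_bar_report(df, timeframe):
    # Re-index onto the full expected timestamp grid for this timeframe;
    # timestamps absent from the input become NaN rows.
    full = df.resample(timeframe).asfreq()
    # Keep weekdays (Mon=0 .. Fri=4) that are not listed holidays.
    keep = (full.index.dayofweek < 5) & np.array(
        [(ts.month, ts.day) not in HOLIDAYS for ts in full.index])
    full = full[keep]
    expected = len(full)
    missing = int(full["close"].isnull().sum())
    return expected, missing, 100.0 * missing / expected
```

The monthly and yearly plots then simply group the null rows by month and year and count them.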




I have taken a 1H data set from the data we use at Asirikuy (which has no missing bars) and have used the pandas sampling function (df.sample(frac=0.95)) to randomly strip out 5% of the data to show you what the expected output looks like (see above images). As you can see above, the script correctly detects that 5% of the data is missing, which represents 5363 bars out of the 107125 that were expected by the program. The script then plots the monthly and yearly distributions of the null data, which shows that the sampling function has indeed removed 5% of the data randomly across the entire set.
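If you want to reproduce this experiment on your own clean data set, the stripping step is a one-liner around df.sample; the random_state argument (my addition, not mentioned in the post) makes the artificial gaps reproducible:

```python
def strip_random_bars(df, keep_frac=0.95, seed=42):
    # sample(frac=keep_frac) keeps a random fraction of the rows;
    # sort_index restores chronological order afterwards.
    return df.sample(frac=keep_frac, random_state=seed).sort_index()
```

Running the quality check on the stripped frame should then report roughly (1 - keep_frac) of the bars as missing.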

Using the above script, anyone using data in the MT4 format can very easily assess an important part of their data quality. However, it is important to note that missing bars are only one component of data quality, as other things such as spikes can also play a fundamental role in determining how high the quality of your data is. It is also possible that a data provider removes missing data by doing some sort of interpolation or replacement (for example, replacing a missing bar with the bar right before it), which may make you think that your data quality is higher than it really is. It is also very important to consider that missing bars are normal on lower timeframes. On something like a 1M timeframe you would indeed expect some percentage of data to be missing, because the market is not active on absolutely every minute; around 1-2% of the data can be missing due to this fact (low liquidity means no trading around 1-2% of the time). However, on timeframes above 10M the number of missing bars should in fact be zero for a high quality data set.
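One simple heuristic for the filled-in-data problem mentioned above, which is my own illustration and not part of the script: a bar whose entire OHLC exactly equals the previous bar's is suspicious, since a provider that forward-fills gaps with the preceding bar produces exact consecutive duplicates:

```python
def count_duplicate_bars(df):
    # Count bars whose open/high/low/close all exactly match the
    # previous bar, a telltale sign of forward-filled gaps.
    cols = ["open", "high", "low", "close"]
    dup = (df[cols] == df[cols].shift(1)).all(axis=1)
    return int(dup.sum())
```

A handful of exact duplicates can occur naturally in very quiet markets, so treat a large count as a warning sign rather than proof of tampering.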

If you would like to learn more about data analysis and obtain long term Forex data that you can use for your own system design and analysis please consider joining, a website filled with educational videos, trading systems, development and a sound, honest and transparent approach towards automated trading.
