Building a Back Testing Platform for Algorithmic Trading

In this series, I’m going to outline, in general terms, how to build a back-testing platform for the creation, tweaking, and subsequent execution of the algorithms used in electronic trading.

Part One – The Data

I recently made some comments on Vertica’s blog regarding what I considered to be a fairly bold claim. They said that Vertica was the only real column store. But even if they are, so what? In my comments, I alluded to my belief that we optimize problems to solutions – we try to fix stuff using what we’ve got in our toolbox without having to run to Home Depot.

The real test comes when the rubber hits the road: how do you actually solve a problem in a new way that’s motivating? And by motivating I mean the solution addresses the issues, enables new capabilities, and is economically attractive.

So rather than tell you that DarkStar and our approach to processing both real-time and historical data (there’s a difference?) is the Real Enchilada, I thought I would illustrate a real world use case.

Let’s say you want to store a bunch of market data.  And I mean a bunch.  You want to store every piece of market data for the whole US Equities market.

And you’d like to have this data so that you can run analytics on it. Or maybe even back-test strategies for buying and selling stocks. So let’s assume that you’ve got some Java code lying around to do that.

For our example, we are interested in seeing whether or not volume-weighted average price (VWAP) strategies actually work. We will pretend that we are buying a lot of stock, and the theory we want to test is whether buying that stock during the day when its price is below its volume-weighted average price gives us a better average price than just going with the flow (often referred to as volume participation).
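
To make that test concrete, here’s a minimal sketch in Java (the `Trade` record and `vwap` method are my own illustrative names, not anything from the post) that computes a VWAP over a handful of trades and checks whether the last price is below it:

```java
import java.util.List;

// Minimal sketch: compute a VWAP and decide whether the current price looks
// "cheap" relative to it. Class and field names are illustrative.
public class VwapExample {

    // A trade is just a price and a size (shares).
    record Trade(double price, long size) {}

    // VWAP = sum(price * size) / sum(size) over the trades seen so far.
    static double vwap(List<Trade> trades) {
        double notional = 0.0;
        long volume = 0;
        for (Trade t : trades) {
            notional += t.price() * t.size();
            volume += t.size();
        }
        return volume == 0 ? Double.NaN : notional / volume;
    }

    public static void main(String[] args) {
        List<Trade> tape = List.of(
            new Trade(100.00, 500),
            new Trade(100.10, 1_000),
            new Trade( 99.95, 2_000));

        double vwap = vwap(tape);
        double lastPrice = 99.95;

        // The hypothesis in the post: only buy when the price is below VWAP.
        System.out.printf("VWAP=%.4f last=%.2f buy=%b%n",
                vwap, lastPrice, lastPrice < vwap);
    }
}
```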

We are all familiar with how relational databases work, and anyone who’s been in capital markets for a while knows how futile it would be to use something like Oracle for this – the cost, the hardware, and just how difficult it is to get the data into the database in the first place.

Oh that’s right, I forgot to tell you, we are going to have to load this data first.

I am not going to go into the relevant benefits of a column store here either, you can check out many other websites for that.

Instead, let’s look at some issues. First, I would rather load the data directly into the database as it happens. Staging the data separately is costly and error prone. In addition, what happens when you decide to load that data and encounter a problem that can’t be fixed in time for the next market day? What if you actually run out of space or compute to get caught up? Well, then you can’t back-test the next day to further refine your algorithms. Algos should be tweaked every day. New algos need to be developed to remain competitive. So here, a database error costs real money.

So I need a fast data store.

As I am loading the data, what happens if one of my disks goes boom? Or one of my machines goes boom? Well, now I have a problem. If I fail over to another data center, how do I reconcile? What a nightmare!

So I need a data store that we can take a sledgehammer to and it will keep running.

Hey, if I have this big historical data store, I still need to query it while it is being updated. Ideally, I would also like to be running analysis and back-testing during the day. Scheduling jobs to run at night is so very ’90s.

So my data store has to facilitate both interactive query and batch analysis.

But wait, doing all of this means that I am going to have to figure out how to use the same code for back-testing that I use to generate orders during market hours. It’s either that or use some visual, script-based, or otherwise different harness for my Java or C++ code. Yet another nightmare.

So, I would like to run the same code against my historical data store that I also use to generate orders during the day.
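
One way to get there – a sketch of the general pattern, not necessarily how DarkStar does it – is to hide the data source behind a common interface, so the strategy never knows whether ticks are coming from the live feed or being replayed from the historical store. All of the type names below are hypothetical:

```java
// Sketch: the same strategy code consumes ticks whether they come from the
// live market data feed or from the historical store. All names are
// illustrative, not from the original post.
interface TickSource {
    void subscribe(String symbol, TickListener listener);
}

interface TickListener {
    void onTick(String symbol, double price, long size, long timestampMillis);
}

// The strategy depends only on TickListener, so it runs unchanged in a
// back-test (a replay loop over stored data) and in production (the
// real-time feed handler).
class VwapStrategy implements TickListener {
    private double notional = 0.0;
    private long volume = 0;

    @Override
    public void onTick(String symbol, double price, long size, long ts) {
        notional += price * size;
        volume += size;
        double vwap = notional / volume;
        if (price < vwap) {
            // In production this would route an order; in a back-test it
            // would record a simulated fill.
            System.out.printf("%s: %.2f below VWAP %.4f -> buy%n", symbol, price, vwap);
        }
    }
}

public class SameCodeDemo {
    public static void main(String[] args) {
        TickListener strategy = new VwapStrategy();
        // A back-test replays stored ticks into the same callback that the
        // live feed handler would call during market hours.
        strategy.onTick("IBM", 100.00, 500, 0L);
        strategy.onTick("IBM",  99.90, 1_000, 1L);
    }
}
```

Because the strategy only sees the callback, promoting a back-tested algorithm to production becomes a wiring change rather than a rewrite.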

There’s a bunch of other stuff too: management, instrumentation, removing old data that I don’t need for back-testing – all the things we associate with normal day-to-day big data operations. We need to know what’s going on during the day so that we can be proactive. There’s gold in that data!

And one last thing, it would be really cool if most of this technology wasn’t proprietary.  I mean let’s face it, firms that talk more about their investors on their websites than their clients can’t possibly have my best interests at heart.

This is a tall list.  Let’s knock it down, one by one.

Here is a diagram for your consideration.

The diagram isn’t very technical, and that’s on purpose – I’m outlining an algorithm, or methodology that may or may not solve our problem.

In the diagram, I’ve depicted the database as a cluster of machines.  Instead of using one big machine backed by a SAN, I’m going to use a number of machines.  Each of those machines is going to connect to the Market Data source and get data.

As we receive the data, we’re going to take a peek at it, determine where in the cluster that data needs to live, and, while we’re doing that, write it to disk. A background process will make sure that the data ends up on the node we want it on. More on why that’s so incredibly important in Part Two – Analyzing the Data in this series.
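
A simple way to decide where a message should live – my sketch, assuming we partition by symbol so each stock’s time series stays together on one node – is to hash the symbol onto the list of nodes:

```java
import java.util.List;

// Sketch of symbol-based routing: every message for a given symbol is owned
// by the same node. Node names are illustrative.
class SymbolPartitioner {
    private final List<String> nodes;

    SymbolPartitioner(List<String> nodes) {
        this.nodes = nodes;
    }

    // Pick the owning node by hashing the symbol. Math.floorMod keeps the
    // index non-negative even when hashCode() is negative.
    String ownerOf(String symbol) {
        return nodes.get(Math.floorMod(symbol.hashCode(), nodes.size()));
    }

    public static void main(String[] args) {
        SymbolPartitioner p = new SymbolPartitioner(
            List.of("node-01", "node-02", "node-03", "node-04"));
        System.out.println("IBM  -> " + p.ownerOf("IBM"));
        System.out.println("AAPL -> " + p.ownerOf("AAPL"));
    }
}
```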

Also, I’m going to ask the cluster to replicate everything we’re writing to it – we’re going to end up writing the data a total of 2 times in this example.  I might usually suggest 3, but we’ve got two data centers running the same solution, so I’ll actually have 4 copies of the data.

Why write the data to multiple nodes in the cluster? First, if a node goes down, I still want to be able to write data. If the node that goes down is the primary node, I’m going to remember that, and when that node comes back up, I’m going to write all the data to it as part of its “Welcome Back to the Cluster Party!” And second, if I’m reading data from the cluster (remember, we’ve got algos running and users querying this data), I want my data. If a node goes down, your users don’t really care – they just want their data. By replicating the data across multiple nodes, I achieve high availability without having to fail over to another instance or data center.
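
To make that concrete, here’s a rough sketch of the idea – my own illustration of replica placement with a catch-up queue, not DarkStar’s actual mechanism: pick the primary off a ring, send copies to the next node(s), and queue writes for any node that’s down so they can be replayed when it rejoins.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Queue;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

// Sketch of replica placement with a catch-up queue for nodes that were down.
// All names are illustrative; this is not the actual implementation.
class ReplicatedWriter {
    private final List<String> ring;     // nodes, in ring order
    private final int copies;            // how many nodes get each write
    private final Set<String> down = new HashSet<>();
    private final Map<String, Queue<String>> missed = new ConcurrentHashMap<>();

    ReplicatedWriter(List<String> ring, int copies) {
        this.ring = ring;
        this.copies = copies;
    }

    void markDown(String node) { down.add(node); }

    void write(String symbol, String message) {
        int primary = Math.floorMod(symbol.hashCode(), ring.size());
        // Send to the primary and the next (copies - 1) nodes on the ring.
        for (int i = 0; i < copies; i++) {
            String node = ring.get((primary + i) % ring.size());
            if (down.contains(node)) {
                // Node is offline: queue the write so we can replay it at its
                // "Welcome Back to the Cluster Party".
                missed.computeIfAbsent(node, n -> new ConcurrentLinkedQueue<>()).add(message);
            } else {
                sendTo(node, message);
            }
        }
    }

    // Replay everything a node missed while it was down, then mark it healthy.
    void welcomeBack(String node) {
        Queue<String> backlog = missed.remove(node);
        if (backlog != null) backlog.forEach(m -> sendTo(node, m));
        down.remove(node);
    }

    private void sendTo(String node, String message) {
        System.out.println(node + " <- " + message);
    }

    public static void main(String[] args) {
        ReplicatedWriter w = new ReplicatedWriter(
            List.of("node-01", "node-02", "node-03", "node-04"), 2);
        w.markDown("node-02");
        w.write("IBM", "IBM 100.00 x 500");   // one copy may get queued
        w.welcomeBack("node-02");             // replay anything it missed
    }
}
```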

OK, we’ve got the sledgehammer test handled, which is cool, but everything I’ve described above sounds like it’s going to take a lot of time and that the system is going to be very slow.

Not true. Each node in the cluster above is subscribing to market data. So if one machine can ingest X messages per second, then a cluster of 10 machines should be able to ingest 10 * X messages per second, right? Let’s see what that means in a real-world example:

On May 20, 2010, there were about 1.1 billion BBO messages published via SuperFeed (NYSE’s market data platform); those quotes represent the best bid, bid size, offer, and offer size for each stock at any given time. In terms of messages per second, that’s about 50,000. In terms of size, that’s about 4,500 KB (roughly 4.5 MB) per second. Hmmm, chunky!

These are intimidating numbers. But if we divide the problem up a bit and use 10 nodes in a cluster, each node only needs to ingest about 5,000 messages, or roughly 450 KB, per second. All of a sudden, we’re dealing with something quite reasonable.
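
Just to show the arithmetic (the ~90 bytes per message figure is my assumption, backed out from 50,000 messages and roughly 4,500 KB per second; the rest comes from the numbers above):

```java
// Back-of-the-envelope throughput math for the figures quoted above.
// The ~90 bytes/message size is an assumption inferred from 50,000 msg/s at
// roughly 4,500 KB/s; the message count and node count come from the post.
public class ThroughputMath {
    public static void main(String[] args) {
        long messagesPerDay  = 1_100_000_000L;   // ~1.1 billion BBO messages
        long tradingSeconds  = 6 * 3600 + 1800;  // 6.5-hour trading day
        int  nodes           = 10;
        int  bytesPerMessage = 90;               // assumed average size

        long msgsPerSecond     = messagesPerDay / tradingSeconds;  // ~47,000
        long msgsPerNodePerSec = msgsPerSecond / nodes;            // ~4,700
        long kbPerNodePerSec   = msgsPerNodePerSec * bytesPerMessage / 1024;

        System.out.printf("whole market: ~%,d msg/s%n", msgsPerSecond);
        System.out.printf("per node:     ~%,d msg/s, ~%,d KB/s%n",
                msgsPerNodePerSec, kbPerNodePerSec);
    }
}
```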

So, now we’ve got a cluster that can load the entire market real time and it’s redundant.  What about analyzing the data?  That’s in Part Two – Analyzing the Data which I’ll post next week.

More Stories By Colin Clark

Colin Clark is the CTO for Cloud Event Processing, Inc. and is widely regarded as a thought leader and pioneer in both Complex Event Processing and its application within Capital Markets.

Follow Colin on Twitter at http://twitter.com/EventCloudPro to learn more about cloud-based event processing using map/reduce, complex event processing, and event-driven pattern matching agents. You can also send topic suggestions or questions to [email protected]
