mikeohara's Blog: Algorithmic Trading

Guarding Against a Future Flash Crash

May 12, 2015 by mikeohara   Comments (0)

Written by Jason Mochine, Commercial Director at Fixnetix. 
The technology is available, but until we use it, global markets remain at the mercy of the financial hounds.

On 6th May 2010 the Dow Jones Industrial Average index nose-dived by 6 per cent in just a few minutes. With just as little warning, the Dow recovered nearly all of its losses, leaving traders and regulators scratching their heads.
Navinder Singh Sarao is thought to have played a key role in this Flash Crash. Dubbed the “Hound of Hounslow” - a less voracious breed of canine than Wall Street’s infamous “wolf” - Sarao allegedly created an extreme order book imbalance in the Chicago Mercantile Exchange futures market to turn a profit.

Through a process known as “layering”, Mr Sarao is thought to have placed a large number of electronic orders to sell futures contracts. The orders, visible to other traders, would have signalled strong selling interest, causing contract prices to fall. Mr Sarao is then thought to have cancelled the orders, causing prices to rebound rapidly. He would have profited by buying contracts while they were artificially cheap and selling them when the price bounced back. The practice is illegal because market rules state that orders must be made in good faith and with the intention to complete. According to US authorities, the Hound made almost $1m on the day of the Flash Crash, and as much as $40m more through market manipulation over the next four years.
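To make the mechanics concrete, here is a minimal sketch in Python of how a surveillance system might flag such a pattern. The order structure, field names and thresholds are illustrative assumptions, not a description of any exchange's or regulator's actual detection logic.

```python
from dataclasses import dataclass

@dataclass
class OrderEvent:
    trader: str
    side: str        # "buy" or "sell"
    qty: int
    action: str      # "place", "cancel", or "trade"

# Hypothetical thresholds; a real surveillance system calibrates these empirically.
CANCEL_RATIO_LIMIT = 0.90   # more than 90% of placed quantity later cancelled
IMBALANCE_LIMIT = 0.80      # placed quantity heavily concentrated on one side

def flag_possible_layering(events: list[OrderEvent]) -> bool:
    """Crude heuristic: large, one-sided resting interest that is mostly
    cancelled, combined with executed trades on the opposite side."""
    placed = {"buy": 0, "sell": 0}
    cancelled = {"buy": 0, "sell": 0}
    traded = {"buy": 0, "sell": 0}
    for e in events:
        if e.action == "place":
            placed[e.side] += e.qty
        elif e.action == "cancel":
            cancelled[e.side] += e.qty
        elif e.action == "trade":
            traded[e.side] += e.qty

    total_placed = placed["buy"] + placed["sell"]
    if total_placed == 0:
        return False

    heavy_side = "sell" if placed["sell"] >= placed["buy"] else "buy"
    other_side = "buy" if heavy_side == "sell" else "sell"

    imbalance = placed[heavy_side] / total_placed
    cancel_ratio = cancelled[heavy_side] / max(placed[heavy_side], 1)

    # Pattern: lopsided resting orders, mostly cancelled, with fills on the other side.
    return (imbalance > IMBALANCE_LIMIT
            and cancel_ratio > CANCEL_RATIO_LIMIT
            and traded[other_side] > 0)

# Contrived example of the pattern described above.
events = [
    OrderEvent("trader_x", "sell", 2000, "place"),
    OrderEvent("trader_x", "sell", 1900, "cancel"),
    OrderEvent("trader_x", "buy", 50, "trade"),
]
print(flag_possible_layering(events))   # True
```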

Notwithstanding potential ill-gotten gains or the extent of culpability, the case highlights just how vulnerable markets can be to manipulation and reveals how regulatory bodies are blinkered to the use of technology when it comes to tackling rogue traders.

In Canada, the pre-trade risk check rules implemented by the Investment Industry Regulatory Organization of Canada (IIROC) and the Canadian Securities Administrators (CSA) exist precisely to combat this kind of system abuse. The regulators stipulated that all parties providing direct market access to their clients were subject to the rules without exception, and the approach has now been in place for a number of years. Whilst the rules were not specifically designed to combat layering, they could easily be extended to do so, since they already police many other factors. When the technology is readily available and proven in action, why aren’t we hearing about the adoption of enforcement technology rather than calls to ban the technology that allows orders to be fired off at great speed?

Fixnetix, for example, has iX-eCute, an FPGA microchip that currently conducts over 70 different checks on key factors such as who is placing orders, whether they are authorised to do so, and whether orders fall within the agreed price parameters. Sitting between the broker and the exchange, it processes every order and rejects, in flight, any that fall outside the agreed rules, no matter how fast a trading system can submit them. The rogue order thus never reaches the exchange. Fixnetix isn’t the only organisation to provide such technology, but some jurisdictions appear more intent on putting the technical genie back in the bottle than on understanding how to embrace technology to protect markets.
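To illustrate the idea only (this is not Fixnetix’s implementation; the limits, trader list and function names below are assumptions), an in-flight pre-trade risk gate reduces to a chain of deterministic checks applied to every order before it is forwarded to the exchange:

```python
from dataclasses import dataclass

@dataclass
class Order:
    trader_id: str
    symbol: str
    side: str      # "buy" or "sell"
    qty: int
    price: float

# Hypothetical per-broker limits; a real gateway loads these from risk configuration.
AUTHORISED_TRADERS = {"T001", "T002"}
MAX_ORDER_QTY = 10_000
PRICE_BAND = 0.05          # orders must be within 5% of the reference price

def pre_trade_check(order: Order, reference_price: float) -> tuple[bool, str]:
    """Run the order through a chain of checks; reject on the first failure
    so a rogue order never reaches the exchange."""
    if order.trader_id not in AUTHORISED_TRADERS:
        return False, "trader not authorised"
    if order.qty <= 0 or order.qty > MAX_ORDER_QTY:
        return False, "order size outside agreed limits"
    if abs(order.price - reference_price) / reference_price > PRICE_BAND:
        return False, "price outside agreed band"
    return True, "accepted"

# Usage: only accepted orders are forwarded to the exchange.
ok, reason = pre_trade_check(Order("T001", "ESM5", "sell", 500, 4120.25),
                             reference_price=4118.50)
```

The attraction of doing this class of checks in hardware rather than software is that they can run at wire speed with deterministic latency, so the gate itself does not become the bottleneck the article's critics worry about.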

By overlooking the power of tried and tested technology, regulatory bodies are electing to hamstring their attempts to catch those people or organisations looking to create market disturbances. This technology’s value is recognised and its success is proven. How long before we decide to wake up to the tools at our disposal so that the hounds stay collared?


Pillar #2 of Market Surveillance 2.0: Past, Present and Predictive Analysis

March 5, 2015 by mikeohara   Comments (0)

By Theo Hildyard
In the second of a blog series outlining the Seven Pillars of Market Surveillance, we investigate Pillar #2, which emphasizes support for combined historical, real-time and predictive monitoring.
Following my last blog outlining Pillar #1 of the Seven Pillars of Market Surveillance 2.0 – a convergent threat system, which integrates previously siloed systems such as risk and surveillance – we continue to look into the foundations of the next generation of market surveillance and risk systems.
Called Market Surveillance 2.0, this next generation will act as a kind of crystal ball, able to look into the future to see the early warning signs of unwanted behaviors, alerting managers or triggering business rules to ward off crises. By spotting the patterns that could lead to fraud, market abuse or technical error, we may be able to prevent a repeat of recent financial markets scandals, such as the Libor fixing and the manipulation of foreign exchange benchmarks.
This is the goal of Market Surveillance 2.0 – to enable banks and regulators to identify anomalous behaviors before they impact the market. Pillar #2 involves using a combination of historical, real-time and predictive analysis tools to achieve this capability.
Historical analysis means you can find out about and analyze things after they’ve happened – maybe weeks or even months later. Real-time analysis means you find out about something as it happens – meaning you can act quickly to mitigate consequences. Continuous predictive analysis means you can extrapolate what has happened so far to predict that something might be about to happen – and prevent it! 
For example, consider a trading algorithm that has gone “wild.” Under normal circumstances you monitor the algorithm’s operating parameters, which might include which instruments are traded, the size and frequency of orders, the order-to-trade ratio and so on. This data comes from historical analysis.
Then, if you detect that the algorithm has suddenly started trading outside of the “norm,” e.g. placing a high volume of orders far more frequently than usual without pause (a la Knight Capital), it might be time to block the orders from hitting the market. This is real-time analysis, and it means that action can be taken before things go too far and impact the market. This can save your business or your reputation.
If the trading algorithm hovers in and around the norm, behaving correctly most of the time but verging on abnormal more often than you deem safe, you can use predictive analytics to shut it down before it crosses the line. In other words, you can predict that your algo might be verging on “going rogue” if it trades unusually high volumes for a microsecond, goes back to normal, then again trades too high.
The trick is to monitor all three types of data to ascertain whether your algo was, is, or might soon be spinning out of control. An out-of-control algo can bankrupt a trading firm, and has.
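As a rough sketch of how the three layers fit together (the metric, thresholds and window sizes are assumptions rather than any vendor's actual surveillance logic), a baseline built from historical data defines the norm, a real-time check blocks clearly abnormal behavior, and a predictive check counts near-misses and intervenes before a hard breach ever occurs:

```python
from collections import deque
from statistics import mean, stdev

class AlgoMonitor:
    """Toy three-layer monitor for one algorithm's order rate (orders per second)."""

    def __init__(self, history: list[float], near_miss_limit: int = 3):
        # Historical analysis: baseline built from past per-second order counts.
        self.baseline = mean(history)
        self.spread = stdev(history)
        self.near_misses = deque(maxlen=20)   # rolling window of recent observations
        self.near_miss_limit = near_miss_limit

    def check(self, orders_per_second: float) -> str:
        hard_limit = self.baseline + 4 * self.spread   # clearly abnormal
        soft_limit = self.baseline + 2 * self.spread   # verging on abnormal

        # Real-time analysis: block immediately if behavior is clearly outside the norm.
        if orders_per_second > hard_limit:
            return "BLOCK"

        # Predictive analysis: count near-misses; too many in a short window
        # suggests the algo may be about to go rogue, so shut it down pre-emptively.
        self.near_misses.append(orders_per_second > soft_limit)
        if sum(self.near_misses) >= self.near_miss_limit:
            return "SHUT_DOWN"

        return "OK"

# Usage: baseline from history, then feed live observations.
monitor = AlgoMonitor(history=[95, 102, 99, 101, 98, 103, 97, 100])
for rate in [101, 107, 99, 108, 106]:   # repeated spikes just above the soft limit
    print(monitor.check(rate))          # OK, OK, OK, OK, SHUT_DOWN
```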
Adding Pillar #2 to Pillar #1 gives you complete visibility across all of your siloed systems, data and processes, while monitoring for events that form patterns in real time or against history in order to predict problems.


Thinking of FPGAs for Trading? Think Again!

February 6, 2015 by mikeohara   Comments (0)

In the competitive world of automated trading, deterministic and ultra-low latency performance is critical to business success. The optimum software and hardware design to achieve such performance shifts over time as technologies advance, market structure changes, and business pressures intensify.

