Wednesday, October 16, 2013

The challenges of high-frequency data

It has been nearly 20 years since the publication of some of the most influential research on trade and quote data.  The past seven years have seen almost unbelievable growth in the number of quotes and a large increase in the number of transactions on major exchanges.

This video shows 10 seconds of trading in BlackBerry Ltd on October 2.  The flurry of activity was attributed (ex post) to rumors of a second private-equity suitor.   The flying objects are trades (circles) and quotes (triangles) generated by participants in one particular market, which are then transmitted to the other 10 exchanges pictured.  The NBBO is shown at the 6 o'clock position.

Given a span of only 10 seconds, I would suspect that most of the limit orders were entirely computer generated. It is also clear that orders are being placed so quickly, and across so many exchanges, that the traditional practice of trade signing with the Lee & Ready algorithm cannot be relied upon: the prevailing quote changes faster than the trade-to-quote matching the algorithm assumes.
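For reference, the Lee & Ready rule compares each trade price to the prevailing quote midpoint and falls back to a tick test for trades at the midpoint. The sketch below is a minimal illustration of that rule, assuming trades have already been matched to their prevailing quotes; the function name, field layout, and midpoint handling are my own simplifications rather than a definitive implementation.

```python
# A minimal sketch of Lee & Ready (1991) trade signing, assuming each trade
# already carries the prevailing bid/ask. Names and inputs are illustrative.

def lee_ready_sign(price, bid, ask, prev_prices):
    """Return +1 for a buyer-initiated trade, -1 for seller-initiated, 0 if unknown."""
    mid = (bid + ask) / 2.0
    if price > mid:          # quote rule: above the midpoint -> buy
        return 1
    if price < mid:          # quote rule: below the midpoint -> sell
        return -1
    # At the midpoint: fall back to the tick test, comparing against the
    # most recent trade at a different price.
    for p in reversed(prev_prices):
        if p != price:
            return 1 if price > p else -1
    return 0                 # no earlier differing price to compare against


# Example: a trade at 8.06 against an 8.05/8.07 quote sits at the midpoint,
# so the tick test decides (last differing price 8.05 -> uptick -> buy).
print(lee_ready_sign(8.06, 8.05, 8.07, [8.04, 8.05]))  # prints 1
```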

The video was produced by Nanex, a specialist low-latency data provider.

2 comments:

  1. So who is working on the next gen Lee & Ready?

  2. A weighted average of Lee and Ready; Ellis, Michaely, and O'Hara; and a tick test helps a lot (a rough sketch of the idea follows the comments). I beat the next-best method by 1%-2% and Lee and Ready by 7% in some cases. For odd-lot orders trading near the midpoint, the wrong delay can make Lee and Ready guess incorrectly 99.4% of the time. While the delay parameters I estimated have surely changed, I also suspect that those delay models (to estimate prevailing quotes) are needed more than ever.

    More here (in JFEC, natch): http://jfec.oxfordjournals.org/content/10/2/390.short

    Dale

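The ensemble described in the comment above could be sketched roughly as follows. This is a hypothetical illustration only: it reuses lee_ready_sign from the earlier sketch, the Ellis-Michaely-O'Hara (EMO) and tick-test rules are textbook simplifications, and the weights are placeholders rather than the estimates (or the quote-delay models) from the linked paper.

```python
# A rough, hypothetical sketch of a weighted vote over Lee & Ready, EMO, and
# a tick test. Weights are made-up placeholders; in practice they (and the
# delays used to pick the prevailing quote) would be estimated from data.

def tick_test(price, prev_prices):
    """+1 on an uptick, -1 on a downtick, 0 if no differing prior price."""
    for p in reversed(prev_prices):
        if p != price:
            return 1 if price > p else -1
    return 0

def emo_sign(price, bid, ask, prev_prices):
    """EMO: trades at the ask are buys, at the bid are sells; else tick test."""
    if price == ask:
        return 1
    if price == bid:
        return -1
    return tick_test(price, prev_prices)

def ensemble_sign(price, bid, ask, prev_prices, weights=(0.4, 0.4, 0.2)):
    """Weighted vote over the three classifiers; sign of the score wins."""
    votes = (lee_ready_sign(price, bid, ask, prev_prices),  # from the earlier sketch
             emo_sign(price, bid, ask, prev_prices),
             tick_test(price, prev_prices))
    score = sum(w * v for w, v in zip(weights, votes))
    return 1 if score > 0 else (-1 if score < 0 else 0)
```

The design choice here is simply to let the classifiers disagree and break ties by weight; the linked paper's gains come largely from estimating those weights and the trade-to-quote delays properly rather than from the voting mechanics.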