We have just released version 0.3 of our FinSpec schema, which introduces support for state-transitioning workflows.
Workflows are a crucial yet underestimated part of specifications. They are a powerful source of information for developers and business analysts seeking to understand client/server interactions and the intricate relationships between technical messages.
And they are not only useful for ROE readers; they can also be a great source of efficiency for ROE publishers. Here are a couple of direct potential applications for automation:
Generation of diagrams: state diagrams naturally fit the proposed model, but it can certainly be extended to sequence diagrams, which are more traditionally used in our industry.
Creation of test cases: the proposed structure holds all the pieces of information required to programmatically derive test cases for quality assurance and client certification purposes.
For this new structure, we have followed the principles of the Finite State Machine (FSM) model. It allows the description of the following objects:
|States:||Definition of all the possible states in the life of the business element you are modelling: order, quote, trade report, etc. You can specify whether a state is initial, final or transitional.|
|Transitions:||They are defined by the following variables: start state(s), events (messages with field conditions triggering the transition) and resulting outputs (responses with end states).|
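To make the states/transitions model concrete, here is a minimal sketch of how a consumer of such a workflow definition might drive a state machine. The class, state names and event names below are illustrative assumptions, not the actual FinSpec 0.3 schema:

```python
# Minimal FSM sketch following the states/transitions model described
# above. All names here are illustrative, not the FinSpec wire format.

class Workflow:
    def __init__(self, states, initial, final, transitions):
        self.states = set(states)            # all possible states
        self.final = set(final)              # terminal states
        self.current = initial               # the initial state
        # transitions: (start_state, event) -> end_state
        self.transitions = dict(transitions)

    def on_event(self, event):
        """Apply an incoming event (message) and return the new state."""
        key = (self.current, event)
        if key not in self.transitions:
            raise ValueError(f"no transition for {event!r} in state {self.current!r}")
        self.current = self.transitions[key]
        return self.current

    def is_final(self):
        return self.current in self.final


# A toy order lifecycle: PendingNew -> Open -> Filled
order = Workflow(
    states=["PendingNew", "Open", "Filled", "Rejected"],
    initial="PendingNew",
    final=["Filled", "Rejected"],
    transitions={
        ("PendingNew", "ExecutionReport.Ack"): "Open",
        ("Open", "ExecutionReport.Fill"): "Filled",
    },
)
order.on_event("ExecutionReport.Ack")
order.on_event("ExecutionReport.Fill")
print(order.is_final())  # True
```

Because the transition table is plain data, the same definition could feed a diagram generator or a test-case generator, which is exactly the automation opportunity described above.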
You will find more documentation at finspec.io. We have included some samples (including diagrams) to help you get familiar with it.
This initial version is merely a starting point for a larger discussion, open to everyone interested in contributing to this project. So please check it out and give us your feedback: how can we make this more useful to you and to our community?
Almost a month has passed since we open-sourced FinSpec Schema 0.1, our vision of what we believe multi-protocol, electronic specifications for financial services should look like. If you still haven't checked it out, head over to https://github.com/finspec/finspec-spec.
Thanks for the overwhelming reception, all the encouragement, suggestions, and most importantly contributions. Suggestions based on real-world examples of specs in a variety of protocols have been most valuable as we enrich and mature the schema, so please keep them coming.
We've been busy working on version 0.2 of the schema, which is now live. Here's a summary:
|General||Added ability to specify charset, whether API is binary or text, and default endianness of a spec.|
|Data types||Instead of having a predefined and growing list of field datatypes, and forcing API consumers to deal with datatypes that may not be useful for their API, we've added the ability to define custom datatypes. In addition to the basic type, you have more control to define length, precision, sign, regular-expression pattern, padding and much more for each datatype.|
|Common blocks||Added commonBlocks section to define blocks that are common across all messages, such as header and footer blocks.|
|Fields||We've added more controls to define a range (minValue - maxValue) for numeric fields, to specify required regex patterns, and to capture conditions specific to a given field (see below).|
|Field conditions||We hope to finally free API authors from simplistic “is it required? (Yes or No)” columns, moving to a world where fields can be conditionally required (or absent) when other conditions hold for the message. It's now simple to explain that Price is a mandatory field for Limit orders but shouldn't appear for Market orders. Check out the documentation for more.|
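As a hedged sketch of the field-conditions idea, here is how a spec consumer might evaluate "Price is mandatory for Limit orders, but must not appear for Market orders". The rule representation below is my own invention for illustration, not the schema's actual format:

```python
# Illustrative evaluation of conditional-field rules. The rule structure
# (field, requirement, condition) is invented for this example only.

RULES = [
    # (field, requirement, condition applied to the message)
    ("Price", "required", lambda msg: msg.get("OrdType") == "Limit"),
    ("Price", "absent",   lambda msg: msg.get("OrdType") == "Market"),
]

def validate(msg):
    """Return a list of rule violations for one message."""
    errors = []
    for field, requirement, condition in RULES:
        if not condition(msg):
            continue  # this rule does not apply to this message
        present = field in msg
        if requirement == "required" and not present:
            errors.append(f"{field} is required here but missing")
        elif requirement == "absent" and present:
            errors.append(f"{field} must not appear here")
    return errors

print(validate({"OrdType": "Limit"}))                  # ['Price is required here but missing']
print(validate({"OrdType": "Market", "Price": 10.5}))  # ['Price must not appear here']
print(validate({"OrdType": "Limit", "Price": 10.5}))   # []
```

The point is that conditionality becomes data a machine can enforce, rather than prose buried in a "comments" column.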
So please check out the version 0.2 schema, documentation, and examples on GitHub and see how you can use it. Remember that the license allows you to use this schema freely for internal purposes, or to publish it to clients.
Create GitHub issues or drop us a line at email@example.com if you need help or have any feedback or suggestions. Let's all discuss, contribute, and promote to take the schema to the next level!
Last year I wrote a blog post concerning a proposal for an electronic format for FIX specifications (or ROEs) known as FIX Service Profile.
Almost 8 months have passed us by and the only visible progress appears to be a name change to FIX Orchestra. (I hope to learn about more substantial progress at the EMEA FIX conference in London tomorrow). My previous post presented a number of challenges with the proposed format, not least the FIX-specific nature of it, and overcoming the chicken-and-egg problem that may hinder adoption.
FixSpec believes that electronic ROEs are the future, and that the industry can't afford to wait for lengthy FIX Trading Community committees to agree. So today we have open-sourced OUR vision of what we believe multi-protocol, electronic specifications for financial services should look like. You can find the schema and documentation at https://github.com/finspec/finspec-spec.
ROEs are coded as simple JSON files using the schema. We chose JSON because of its simplicity, the fact that it doesn't require specialist software to view or edit, its widespread support in almost all programming languages, and the fact that it plays nicely with web APIs. There is also a wide variety of free tools to diff two JSON documents simply and easily, so ROE comparison becomes a breeze.
Just because it is JSON doesn't mean it is unstructured, however -- this is where the FinSpec schema comes in. The schema (itself a JSON document) describes how a specification must be laid out, and what fields and content are required for the document to "validate" -- much like a DTD in XML. So once you have created your ROE document, you can validate it against the FinSpec schema to ensure that the recipient will be able to parse it. There are a variety of free tools to do that, or simply grab our node.js validator from GitHub.
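In practice you would use a proper JSON Schema validator (like the node.js one mentioned above), but the underlying idea fits in a few lines. Here is a minimal, standard-library-only sketch; the required top-level keys are assumptions for illustration, not the actual FinSpec schema:

```python
import json

# Toy "schema" check: the required section names below are invented
# for illustration and are NOT the real FinSpec required keys.
SPEC_REQUIRED_KEYS = {"finspec", "info", "messages"}

def validate_spec(text):
    """Parse a JSON ROE document and check it has the required sections."""
    doc = json.loads(text)  # raises ValueError on malformed JSON
    missing = SPEC_REQUIRED_KEYS - doc.keys()
    if missing:
        raise ValueError(f"spec is missing required sections: {sorted(missing)}")
    return doc

roe = '{"finspec": "0.1", "info": {"title": "Demo API"}, "messages": []}'
doc = validate_spec(roe)
print(doc["info"]["title"])  # Demo API
```

A real validator does far more (types, patterns, nesting), but the workflow is the same: publisher validates before distribution, so the recipient knows the document will parse.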
By the way, JSON Schema has a whole host of other advantages, which I won't go into here but will blog more about as the schema develops.
The initial schema supports simple descriptions of messages, fields and values. It can also document contacts, contain message examples, and is extensible to allow custom content.
We are already working on v0.2, which allows the capture of both field conditionality and state-transitioning workflow - more details to follow shortly.
Why not simply work within the FIX Trading Community process? Well, partially because we really want this schema to remain protocol-agnostic and not tied to (or overly influenced by) FIX. But mostly this is about evolving the schema in a much faster, more transparent way. Your firm doesn't have to be a FIX Trading Community member to contribute, you don't have to be "nominated" to sit on a committee, and you don't have to sit through endless conference calls. We are inviting everybody to contribute to something that we hope will make everybody's lives easier.
Today we release only version 0.1 of this schema. With your help we can quickly make it better and more useful for everybody. Please don't sit on your hands and wait until version 2.x before investigating how this can benefit your firm -- get in on the ground floor and help shape the schema to solve your real-life issues.
Drop us a line at firstname.lastname@example.org or send us an issue on Github, and let's build this schema together.
One of the most surprising facts I have encountered in our little niche market of financial services onboarding and certification is just how much attitudes towards it vary.
Certification is the "driving test" that counterparties are required to perform to demonstrate compliance with a trading or market data interface. In a world of continually changing software and algorithms, how often should counterparties be put through their paces?
Some exchanges don't require much formal certification at all (e.g. Deutsche Boerse), some require certification only upon initial application (e.g. NASDAQ), and some demand regular re-certification (e.g. London Stock Exchange, which asks all members to re-certify twice per year). Brokers typically bend over backwards to actively avoid re-certifying their customers, as anyone who has ever seen algos labeled CUSTOM1, CUSTOM2 and CUSTOM3 will readily attest.
This general lack of re-certification seems to be completely off the radar of most compliance teams and regulators - a seemingly large blind spot given the continual tidal wave of regulations requiring increased testing and monitoring of electronic trading systems.
If you accept the premise that more frequent certification can only reduce the likelihood of things going wrong (it's very hard to argue that it would increase risk), why don't firms certify more? The answer is very obvious - it takes too much time and effort to set up, schedule and administer.
What if there was a better way? Enter the idea of "continual certification" - a process which runs production data through certification to see if people are still compliant. After all, if you demonstrate you can enter, amend and delete orders in production every day, why do you need to demonstrate it again in a special time slot?
A good real-world analogy for continual certification is "Pay How You Drive" (PHYD) car insurance. Newly-qualified drivers typically face steep premiums because - as a profile group - they are seen as more risky than experienced drivers. Under a PHYD scheme, however, new drivers can install a telematics box in their car that tracks and reports on their real-world driving (i.e. production), effectively subjecting them to a continual driving test that the insurer can review, adjusting their risk profile and premiums accordingly.
Not only would continual certification check that counterparties are correctly interfacing with your application, but it can also highlight areas where clients are doing things that they are not certified to do (blind spot number 2).
For the sake of minimising initial certification effort, it's very common to remove certain maneuvers from the certification pack. For example, if my algo box can only send limit orders, why do I need to demonstrate any other order type? It makes perfect sense, but where does that information go, and how is it actually enforced in production? We know that some brokers and venues do have controls and procedures in place, but it is far from universal, and the job of reviewing logs to check that clients are still behaving the same way is... well, a lot of effort. How would you feel if you had a car accident with someone who was only qualified to drive a motorbike?
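The core of such a check is simple to sketch: compare what a client actually does in production against what they were certified for. The client names, message-type labels and log format below are illustrative assumptions, not any particular venue's data:

```python
# Sketch of a "continual certification" check: flag production activity
# that falls outside a client's certified capabilities. All names and
# the log format here are invented for illustration.

CERTIFIED = {
    "algo-client-1": {"NewOrder:Limit"},  # certified for limit orders only
}

def uncertified_activity(client, production_log):
    """Return message types seen in production but never certified."""
    allowed = CERTIFIED.get(client, set())
    seen = {f"{m['type']}:{m['ordType']}" for m in production_log}
    return sorted(seen - allowed)

log = [
    {"type": "NewOrder", "ordType": "Limit"},
    {"type": "NewOrder", "ordType": "Market"},  # never certified!
]
print(uncertified_activity("algo-client-1", log))  # ['NewOrder:Market']
```

Run daily over production drop-copies, a check like this turns the "reviewing logs is a lot of effort" problem into an automated report.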
When done correctly, continual certification not only reduces overall risk in the trading ecosystem but also yields new business-intelligence insights into exactly how people are interacting with your API - so it ticks the big-data buzzword box too.
Continual certification has the potential to significantly reduce risk while minimizing the administrative burden of recertification (especially when coupled with a self-certification portal giving clients a view into their own activity), which is why I believe it will become a major trend in the next 3 years.
Over the past few months I've been talking with people on both sides of the Atlantic - exchanges, brokers and ISVs - about the emerging concept of electronic specifications in financial services, and the FIX Trading Community's proposed "FIX Service Profile" in particular.
It's important to begin with words of support for the initiative; the idea of electronic specifications as a way to reduce the massive inefficiencies that currently plague the task of making and maintaining connectivity is spot-on. It is the idea we've been championing at FixSpec for almost three years (and indeed our founding vision), so wider recognition of the underlying problem and initiative is certainly welcome.
In a recent Tabb video though I hinted that the FIX Service Profile may not be the panacea that some are hoping for. Many people contacted me after watching the video to understand more, so I wanted to use this blog to share some of the very consistent feedback I've heard both in recent months and indeed since FixSpec's launch in late 2012.
The first - and biggest - challenge is the classic "chicken and egg" problem.
Brokers and exchanges won't invest their precious time to convert their Word or PDF documents into an electronic format unless there is demand from customers (or their ISVs) who can consume it. I've heard all of the usual predictions of major FIX Trading Community firms being obliged / coerced / forced to create and distribute in this format, but unfortunately I simply don't believe them. It didn't happen for FIXatdl. It didn't happen for FIX Repository. So why is this time different?
On the flip side, ISVs and brokers have no incentive to change their systems to process a new format until there is a critical mass of specs available in that format. Discussions with major ISVs indicate that most already have internal, well-established, XML-based schemas to configure gateways. So why would they take on the effort and risk of moving to a format which arguably erodes their competitive position? They won't. The most likely alternative response is to follow the FIXatdl path; hack together an internal conversion tool to map the "formal" schema into your internal schema and then continue to use that.
ASIDE: FixSpec's response to this current market reality is to stop promoting a one-size-fits-all approach and instead work with ISVs to generate proprietary XML exports directly out of our repository. This approach means that electronic specs are accessible to ISVs today.
The second challenge is scope, and the clue is right there in the name - this is only about FIX. But the simple fact is that most exchanges in the world today don't exclusively offer FIX for order entry, and most offer market data in some format other than FIX FAST. So restricting the initiative to FIX makes it only half of a solution for exchanges, and it actually isn't that useful for brokers, who rarely have a "fixed" FIX interface anyway (more on that in a later post).
Financial services connectivity (especially pre-trade) always has been - and always will be - multi-protocol, and it is time that we as an industry started to recognise that and build formats, products and services that are multi-protocol from birth.
In a previous job at a large ISV, I found it interesting to compare the tasks of developers building gateways for order entry versus those building market data gateways. Sure, market data APIs change far less frequently than order APIs, but think about the core infrastructure required to even begin the task - there are no consistent "standards" to refer to, no handy online decoders, and no open-source projects to help seed an architecture. This simple observation means that a "Market Data Service Profile" would be of even higher value than one restricted to a single order-entry protocol.
So where does this leave us? My sad prediction is that FIX Service Profile will follow the well-trodden path of other formats before it, failing to achieve traction and being consigned to become yet another dusty PDF on a website somewhere. The problem isn't the absence of a format, it's the environment and the approach itself.
It's time for a radically new approach, and I hope to share some of our thinking on these topics in coming posts.