Effective Connectivity Series #3: Writing Good Certification Tests

FixSpec 20th May 2019
8 min read

Best Practice Onboarding Efficiency Series

This post is primarily written for readers who are part of an onboarding team, either upgrading existing customers to new API functionality or conforming new customers to your trading service or market data platform.

While this type of technical connectivity testing goes on all day, every day in banks, vendors and trading venues across the globe, it never ceases to surprise me just how little guidance or best practice is out there to help teams build and run efficient testing processes. It seems that the only option available to firms with a conformance requirement is to hire FIX experts who have done this job somewhere else before and can use their knowledge to cobble together a process that just about works (in truth, this is often achieved by simply copying the process they used in their previous job).

Over time, this replication approach has naturally led to a high degree of similarity between firms (a good thing), but since this is replication rather than evolution, opportunities to carefully consider improvements are passed over (a bad thing).

This blog series examines the end-to-end onboarding process and recommends improvements, while this post zooms in on one important aspect of it where we often find quite ingrained views - how to approach writing certification tests. It turns out that this is a big, boring topic, but having worked with lots of firms who have either written, inherited (or copy/pasted!) certification scenarios, we've been able to see the patterns in what works and what doesn't when it comes to certification, and hopefully we can give you a head-start on improving yours.

1) Organise Them Well

It's a simple fact that your customers or vendor partners are unlikely to write to 100% of the functionality offered in your API. Perhaps some of it wouldn't meet their business profile or needs, or perhaps it would be too technically difficult for them to implement. So you just need to accept this as a given and plan for it.

Since the purpose of conformance testing is to only test the pieces of functionality that a customer is planning on using, you will almost always find yourself reducing the scope of certification to meet their needs. Organising certification tests into well-defined and structured groups will help both you and your customer understand which tests need to be completed and which can be skipped.

2) Tie Them To What Customers Use (And Log It!)

As we said in our Clarity Up Front post in this series, understanding what functionality customers or vendors intend to use in production is vital to starting the project on the right path.

Ideally, capturing the set of functionality from internal sales teams or customers early on will allow you to present a pre-filtered set of certification tests to customers, reducing both time and potential confusion.

Equally important is the need to centrally capture the set of functionality that the customer can support as soon as this becomes known, either from direct email or phone interactions with them or from their actual conformance. The capture tool could be as simple as an Excel file on a shared drive or could be embedded in your automated certification tool ready for future use.

Note that in this step you should be capturing functionality used rather than a list of scenarios tested/passed. The benefits of this will become obvious over time as you add tests alongside new functionality or extend the coverage of existing functionality. Working out which customers need to be tested (or re-tested) becomes much easier if you have (1) a customer-to-functionality mapping, combined with (2) a functionality-to-scenario mapping.
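The two mappings compose naturally. A minimal sketch, using hypothetical customer and scenario names, of how a functionality-level record answers the re-test question:

```python
# (1) Customer-to-functionality mapping (captured as functionality, not scenarios).
customer_functionality = {
    "acme":   {"limit_orders", "cancels"},
    "globex": {"limit_orders", "market_data"},
}

# (2) Functionality-to-scenario mapping, maintained alongside the test suite.
functionality_scenarios = {
    "limit_orders": {"T01_new_limit", "T02_amend_limit"},
    "cancels":      {"T03_cancel"},
    "market_data":  {"T10_subscribe"},
}

def customers_to_retest(changed_functionality):
    """Which customers are affected when a functionality's tests change?"""
    return {c for c, funcs in customer_functionality.items()
            if changed_functionality in funcs}

# Extend limit-order coverage and both customers fall into scope for re-testing:
affected = customers_to_retest("limit_orders")
```

Had you stored only "scenarios passed", adding T02 later would tell you nothing about who needs to run it; the functionality record does.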

3) Make Them Small

Scenarios should be discrete – and preferably very small – in scope. To avoid a massive test suite (unwieldy for you, and frustrating for your customers), break scenarios into simple, small tasks which can be tested independently and discretely. If a test has more than two or three steps, consider breaking it down into smaller tests.

As a bonus, you should always try to make the tests as generic as possible. For example, try not to be overly specific about which instrument, side, quantity or price customers should use unless it is strictly required for the test itself. We often see firms asking for very specific values in an effort to make it easier to identify those messages in log files. All too often this breaks down, however, when the instrument identifier changes over time or the price moves too significantly. Well-designed commercial certification software should make such specifics redundant anyway, replacing them with more intelligent, automated querying.
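The "intelligent querying" idea can be sketched as matching on message shape rather than magic values. This is an illustrative assumption, not any particular vendor's API; the field names are FIX tag mnemonics and the dict representation is for demonstration only:

```python
# Match a captured message by structure: any limit order with a price and
# a symbol passes - no hard-coded instrument, quantity or price required.
def matches_limit_order_test(msg):
    """Pass any well-formed limit order, regardless of specific values."""
    return (
        msg.get("MsgType") == "D"      # NewOrderSingle
        and msg.get("OrdType") == "2"  # Limit
        and "Price" in msg             # a price must be present...
        and "Symbol" in msg            # ...and some symbol, but ANY symbol
    )

captured = {"MsgType": "D", "OrdType": "2", "Symbol": "XYZ", "Price": "101.5"}
```

A matcher like this keeps working when the customer's preferred test instrument or the market price changes, where a script demanding "send symbol ABC at 100.00" would break.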

4) Make Them Targeted

Try to avoid “pre-condition” steps which are already covered by a different test. For example, if customers can subscribe to and unsubscribe from market data, then have one scenario for subscribing and one for unsubscribing (without repeating the subscribe steps as a pre-condition).

5) Don't Make Them Impossible

When you read this one aloud it seems pretty obvious, but you would be surprised how many times we see certification scripts where prospects are asked to send in something deliberately nonsensical just to check that "their system can handle the rejection". Examples include market orders with a price indicated, orders without a symbol, or requests for quote on swaps with historic dates. These are all things that customers with correctly-designed software should not be able to enter at all! So if anybody does pass such a test, then you know that either (a) they aren't using their production software, or (b) their production software does not contain sufficient protection against basic, venue-agnostic issues. Both of which should be big red flags!
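To see why these tests are unpassable by well-built software, here is a sketch of the client-side guard that any correctly-designed order-entry system should already contain (field names are hypothetical, for illustration):

```python
# The kind of pre-send validation that makes "deliberately invalid" tests
# impossible to perform from real production software.
def validate_order(order):
    """Return a list of problems; an empty list means the order may be sent."""
    problems = []
    if not order.get("symbol"):
        problems.append("missing symbol")
    if order.get("ord_type") == "market" and "price" in order:
        problems.append("market orders must not carry a price")
    if order.get("ord_type") == "limit" and "price" not in order:
        problems.append("limit orders require a price")
    return problems
```

If a counterparty can get a market order with a price past a guard like this, that tells you something worrying about their software, not something reassuring about their rejection handling.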

Testing "impossible" edge cases like this is the domain of your QA test suite - not customers. Which brings us to...

6) A Certification Test Is Not QA'ing Your System

QA - not client conformance - is where you should test your own system to ensure it works as expected.

Any test designed to deliberately trigger a rejection by your system should be examined critically to understand why it appears in client conformance.

Our rationale here is that a client in 100% conformance with your API should never experience rejections anyway, so you are not testing a realistic scenario but instead QA-testing that your own system rejects in some artificial circumstances.

7) A Certification Test Is Not QA'ing Your Customer's System

QA’ing your customer’s system is their job. We would strongly recommend against scenarios designed to create rejects (or exhaustively test permutations) simply to “test that the customer can handle something”.

Such scenarios are often inconclusive and inefficient. Not only are you exposed to (unseen) error-handling processes within the customer’s EMS/OMS, but they also imply some inefficient, manual, offline conversation with the customer about whether it worked from their perspective. If you have tests like this, then spend some time thinking about HOW you get confirmation that the customer's software handled the response as expected.

8) Establish A Test Window Policy

A number of factors can cause customers to fail to complete all testing in a single day, including timezone differences, availability of personnel, or simply the number of tests to be performed. Firms should try to accommodate these wherever possible by breaking the certification suite up into smaller "test packs" (see (1) above) and allowing firms to certify each test pack over a number of days.

At the same time, firms should formally define what we refer to as a Test Window Policy - the maximum number of consecutive days over which customers may complete various test packs or portions of functionality, such that they complete the full suite of tests within the test window. The common argument in favour of a short test window (sometimes just a single day), is that by limiting the time window you have greater confidence that the customer hasn't changed their software version or code between tests; they should, after all, be using the same version of code throughout their certification. The longer the certification takes, the more likely it is that their code has changed, and therefore the less certain you can be that the original test results remain valid.

FixSpec would advocate 1 calendar week as a good compromise here; it is short enough to limit the scope of software change while allowing enough flexibility for scheduling issues.
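Enforcing a Test Window Policy is then a simple date check. A minimal sketch, assuming you record a completion date per test pack and adopt the 1-calendar-week compromise above:

```python
from datetime import date

def within_test_window(completion_dates, window_days=7):
    """All packs must complete within `window_days` consecutive calendar days."""
    span = (max(completion_dates) - min(completion_dates)).days + 1
    return span <= window_days

# Three packs completed across five calendar days - inside a 7-day window:
dates = [date(2019, 5, 20), date(2019, 5, 22), date(2019, 5, 24)]
```

Checking the span between first and last completion (rather than counting testing days) reflects the real concern: how much calendar time the customer's code had in which to change.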

9) Don't Aim For Exhaustive Tests

It’s tempting to build a test for every permutation of field combination, but this quickly becomes frustrating for customers and has rapidly diminishing value. Instead, distinguish field values which are largely independent of other fields (e.g. order side) from those which trigger other intra-message field requirements (e.g. limit orders have a mandatory price). Scenarios should focus on testing the latter, with independent fields slotted in wherever possible.

For example, instead of creating a buy and a sell test, create a limit vs market order test and make one a buy and one a sell.
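The "slot independent fields in" idea can be sketched as rotating the independent dimension across the dependent one, rather than taking their cross-product (field names here are illustrative):

```python
from itertools import cycle

order_types = ["limit", "market"]   # drives intra-message requirements
sides = cycle(["buy", "sell"])      # independent - rotate, don't multiply

# Two scenarios (limit/buy, market/sell) instead of four:
scenarios = [{"ord_type": t, "side": next(sides)} for t in order_types]
```

With more independent fields (time-in-force, capacity, and so on), the savings compound: the suite grows with the number of dependent cases, not the product of every field's values.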

10) Make The Process Self-Service (If Possible)

The phrase "self-service" in the context of certification often has two meanings, which are important to clearly differentiate:

  1. Customers can get on with the task of testing themselves at a time that suits them, without needing to book a "test slot" or have a member of staff on the phone to guide them through the process step by step. Let's call this "self-service".

  2. Customers can fully certify themselves without ever needing to interact with a member of staff. Let's call this "self-certify".

I'll go into more detail in a follow-up post, but for now, I'll simply sum up our perspective as follows: the journey towards greater automation around (FIX) connectivity is just that - a journey. Anyone who believes they can skip directly from current manual processes to full customer self-certification without any staff interaction is deluding themselves. Not because it isn't possible - it is - but because firms don't realise that in order to achieve it, they need to put in place a series of foundations that they simply don't have today.

FixSpec helps firms of all sizes undertake this journey to greater automation, but the first goal for all firms should be "self-service" before "self-certify". Practically, this means:

  1. Allowing customers regular access to a test environment that reasonably mirrors production,
  2. Sufficient API documentation to allow them to undertake development work without needing to continually call your staff,
  3. Sufficient certification information to allow them to understand what is expected of them without needing to book a meeting with your staff, and
  4. A feedback mechanism by which customers can tell you that they are ready for a certification review or struggling with a particular test.

We hope these tips help your firm improve your certification process. Remember that designing good certification tests is more of an art than a science; they get better with time and experience. If you would like help improving your certification test suite, then please drop us a line or DM us at @FixSpec - we are happy to help.

Find This Useful?

To receive more tips for improving efficiency in YOUR connectivity process, sign up to our FREE monthly email newsletter.
