Certification Best Practice
Every installation of our automated onboarding software comes with the task of uploading certification scenarios. So we've seen a lot of them!
The test suites we see vary considerably in quality, in completeness (overly exhaustive coverage being just as bad as very thin coverage), and in ease of use.
At FixSpec, we've developed five “golden rules” for creating good certification scenarios:
Your own QA process is where you should test that your system works as expected, not client conformance.
Any test designed to deliberately trigger a rejection by your system should be examined critically: ask why it belongs in client conformance at all.
Our rationale here is that a client in 100% conformance with your API should never experience rejections anyway, so you are not testing a realistic scenario but instead QA-testing that your own system rejects in some artificial circumstances.
QA’ing your customer’s system is their job. We would strongly recommend against scenarios designed to create rejects (or exhaustively test permutations) simply to “test that the customer can handle something”.
Such scenarios are often inconclusive and inefficient tests. Not only do they depend on (unseen) error-handling processes within the customer’s EMS/OMS, but they also require an inefficient, manual, offline conversation with the customer about whether the test worked from their perspective.
Scenarios should be discrete – and preferably very small – in scope. To avoid a massive test suite (unwieldy for you, and frustrating for your customers), break scenarios into simple, small tasks which can be tested independently and discretely.
Also, try to avoid “pre-condition” steps which are already covered as part of a different test. For example, if customers can subscribe and unsubscribe from market data, then have one scenario for subscribing, and one for unsubscribing (without the subscribe pre-condition steps).
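As a sketch of what this looks like in practice (the scenario format below is illustrative, not our platform's actual syntax; the FIX message names are standard), subscribe and unsubscribe stay as two independent scenarios, with neither embedding the other as a pre-condition:

```python
# Hypothetical scenario catalogue: each scenario is small, discrete, and
# independently runnable -- the unsubscribe test does NOT repeat the
# subscribe steps as a pre-condition.
scenarios = [
    {
        "name": "subscribe-market-data",
        "steps": ["client sends MarketDataRequest (Subscribe)",
                  "venue replies with MarketDataSnapshotFullRefresh"],
    },
    {
        "name": "unsubscribe-market-data",
        "steps": ["client sends MarketDataRequest (Unsubscribe)",
                  "venue stops sending updates"],
    },
]

# Sanity check that scenarios stay independent: no scenario should name
# another scenario as a pre-condition.
for s in scenarios:
    assert "preconditions" not in s, s["name"]
print(len(scenarios), "independent scenarios")
```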
It’s tempting to build a test for every permutation of field combinations, but this quickly becomes frustrating for customers and has rapidly diminishing value. Instead, separate field values which are largely independent of other fields (e.g. order side) from those which trigger other intra-message field requirements (e.g. limit orders have a mandatory price). Scenarios should focus on testing the latter, with independent fields slotted in wherever possible.
For example, instead of creating a buy and a sell test, create a limit vs market order test and make one a buy and one a sell.
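A minimal sketch of that idea (the FIX tag numbers – 40 OrdType, 44 Price, 54 Side – are standard, but the scenario structure and the `build_suite` helper are purely illustrative): independent field values are rotated through the scenarios that test dependent rules, instead of being multiplied out into a cross-product.

```python
# Dependent rule: OrdType (tag 40) drives which other fields are required.
ord_type_scenarios = [
    {"name": "market-order", "40": "1"},                 # Market: no Price
    {"name": "limit-order",  "40": "2", "44": "10.50"},  # Limit: Price (44) mandatory
]

# Independent field: Side (tag 54) doesn't change any other field's rules.
independent_sides = ["1", "2"]  # 1 = Buy, 2 = Sell

def build_suite(scenarios, sides):
    """Rotate independent values through the scenarios so every value
    appears somewhere, without doubling the size of the suite."""
    suite = []
    for i, scenario in enumerate(scenarios):
        msg = dict(scenario)
        msg["54"] = sides[i % len(sides)]
        suite.append(msg)
    return suite

suite = build_suite(ord_type_scenarios, independent_sides)
# Two scenarios cover market vs limit AND buy vs sell -- not four tests.
for s in suite:
    print(s["name"], "Side =", s["54"])
```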
(OK, so this one is only applicable to users of our certification platform, as functional views are something unique to us).
Functional views put a technical message within a particular business context. For example, Limit and VWAP orders are different business contexts for a new order message and can therefore be represented as functional views. They give you the ability to “clean up” the message accordingly, removing fields and/or values which are not relevant in that context, or changing descriptions.
Central automatically applies functional view conditions in scenarios, saving a lot of time and reducing the errors involved in re-entering the same conditions again and again. It also gives you (and your customers) a much clearer understanding of a piece of functionality on which to build your tests.
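Conceptually (this is a sketch of the idea only – the field names, the view structure, and the `apply_view` helper are illustrative, not Central's actual API), a functional view can be thought of as a named filter applied to a base message definition:

```python
# Base definition of a new order message (illustrative subset of fields).
new_order_fields = {
    "Side":     "Buy or Sell",
    "OrdType":  "Order type",
    "Price":    "Limit price",
    "MaxFloor": "Display quantity for iceberg orders",
}

# A functional view: trims fields that are irrelevant in this business
# context and pins down descriptions that are fixed within it.
limit_order_view = {
    "remove":   ["MaxFloor"],                      # not relevant for plain limits
    "describe": {"OrdType": "Always 2 (Limit)"},   # fixed in this context
}

def apply_view(fields, view):
    """Return the base definition with the view's removals and
    description overrides applied."""
    result = {k: v for k, v in fields.items() if k not in view["remove"]}
    result.update(view["describe"])
    return result

limit_order = apply_view(new_order_fields, limit_order_view)
```

Defining the context once and reusing it across scenarios is what removes the repeated re-entry of the same conditions.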
Designing good certification tests is more of an art than a science; they get better with time and experience. If you would like help improving your certification test suite, then get in touch - we'd be happy to help.