One of the things I’m trying to establish, now that the team is working with xAPI on more planning and architecture work, is much more up-front testing. You might’ve noticed in my last post (and before that) I’ve referenced wanting to test stuff against CATAPULT, since our infrastructure relies on cmi5. I wrote a little too fast for my brain, though. CATAPULT is one framework that tests your content and/or your system for cmi5 conformance; it doesn’t validate xAPI statements on their own. Since I need to think more clearly so I can write more clearly, it merits today’s post on the different tools that exist for testing different things related to xAPI.
This will not be an exhaustive list (at least to start). The following are tools, code libraries, or other resources that a) I know about and b) I’m thinking about using in project work. If you’re doing this type of work and use similar tools, please leave a comment or message me and let me know what you’re using. I’d welcome any opportunity to help spread better practices around testing our learning engineering work.
Validating xAPI Statements
There are several validators that will let you paste or type in an xAPI statement and validate it against the xAPI specification itself, but the ones that are easiest to use and most likely to be kept up-to-date are:
There’s also a library that will allow you to test an xAPI Statement against a given xAPI Profile.
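Before reaching for any of those validators, it can help to see what the bare minimum of a well-formed statement looks like. Here’s a toy structural check in Python; the helper name and the sample activity IRI are made up for illustration, and this is nowhere near a substitute for a full spec validator or a profile-aware library:

```python
import json

# Toy pre-flight check (hypothetical helper, NOT a full spec validator):
# every xAPI statement must at minimum carry an actor, a verb with an
# IRI id, and an object.
REQUIRED_KEYS = ("actor", "verb", "object")

def quick_check(statement: dict) -> list[str]:
    """Return a list of problems; an empty list means the basics are present."""
    problems = [k for k in REQUIRED_KEYS if k not in statement]
    if "verb" in statement and "id" not in statement["verb"]:
        problems.append("verb.id (IRI) is required")
    return problems

stmt = json.loads("""{
  "actor": {"mbox": "mailto:learner@example.com"},
  "verb": {"id": "http://adlnet.gov/expapi/verbs/completed",
           "display": {"en-US": "completed"}},
  "object": {"id": "https://example.com/activities/lesson-01"}
}""")

print(quick_check(stmt))  # []
```

A check like this catches the obvious omissions before you paste a statement into a real validator, which can then focus on the harder stuff (IFI rules, timestamps, context structure).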
We’re currently modeling new statements that support different cmi.interactions. We’re also developing a schema for the Casey-Fink survey with extra validation rules (to support restricting things depending on how users answer a survey item) and the semantic associations we’ll want to make between survey items and other concepts (content, competencies, objectives, etc.). For both efforts, we’ll definitely rely on Persephone, above, to test our drafted statements for new cmi.interactions against the cmi5 profile.
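To make the cmi.interactions modeling concrete, here’s a hedged sketch of the kind of statement we’d draft for a choice-style interaction. The activity IRI and choice ids are invented for illustration; the activity type IRI and interactionType/choices/response shape come from the xAPI interaction activity conventions:

```python
# Sketch of a choice-style cmi.interaction statement (activity id and
# choice values are hypothetical; the definition.type IRI is the standard
# xAPI interaction activity type).
interaction_statement = {
    "actor": {"mbox": "mailto:learner@example.com"},
    "verb": {"id": "http://adlnet.gov/expapi/verbs/answered",
             "display": {"en-US": "answered"}},
    "object": {
        "id": "https://example.com/activities/casey-fink/item-07",  # hypothetical
        "definition": {
            "type": "http://adlnet.gov/expapi/activities/cmi.interaction",
            "interactionType": "choice",
            "choices": [
                {"id": "agree", "description": {"en-US": "Agree"}},
                {"id": "disagree", "description": {"en-US": "Disagree"}},
            ],
        },
    },
    # The learner's answer references a choice id.
    "result": {"response": "agree"},
}

print(interaction_statement["object"]["definition"]["interactionType"])  # choice
```

Drafts like this are exactly what we’d feed to Persephone to see whether they hold up against the cmi5 profile.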
Validating xAPI Profiles
We’re also going to be drafting an xAPI application profile for the Casey-Fink survey (or at least, that’s how we’re approaching it), which means we need to make sure the JSON-LD profile we create is legit. Fortunately, there are now at least two ways to test an xAPI Profile.
When it’s time to actually construct the JSON-LD, I think we’ll do the authoring in the ADL xAPI Profile Server but test it (and potentially build workflow) around the Pan code, once we can shake out the differences in their testing results. Nobody wants to see differences between two validators, but I kind of expect them, because this work is still a craft. The standard parts work well enough now to make the craft work worth doing, imho.
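Whichever validator wins, a cheap pre-flight check on the draft profile document can save round-trips. This sketch only confirms the top-level properties the xAPI Profiles spec requires are present; the profile IRIs and labels below are hypothetical placeholders, and a real validator (the ADL Profile Server or Pan) is still needed for the actual semantic checks:

```python
# Toy pre-flight check for a draft xAPI Profile JSON-LD document
# (not a replacement for a real profile validator). The required
# top-level properties come from the xAPI Profiles spec.
REQUIRED_PROFILE_KEYS = (
    "id", "@context", "type", "conformsTo",
    "prefLabel", "definition", "versions", "author",
)

def missing_profile_keys(profile: dict) -> list[str]:
    """Return the required top-level keys the draft is missing."""
    return [k for k in REQUIRED_PROFILE_KEYS if k not in profile]

# Hypothetical skeleton for the Casey-Fink profile draft.
draft = {
    "id": "https://example.com/profiles/casey-fink",  # placeholder IRI
    "@context": "https://w3id.org/xapi/profiles/context",
    "type": "Profile",
    "conformsTo": "https://w3id.org/xapi/profiles#1.0",
    "prefLabel": {"en": "Casey-Fink Survey Profile"},
    "definition": {"en": "Draft profile for Casey-Fink survey statements."},
    "versions": [{"id": "https://example.com/profiles/casey-fink/v1.0",
                  "generatedAtTime": "2024-01-01T00:00:00Z"}],
    "author": {"type": "Organization", "name": "Example Org"},
}

print(missing_profile_keys(draft))  # []
```

Running something like this in CI before pushing a draft to either validator keeps the feedback loop tight while we figure out where their results diverge.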