Among the things I didn’t list in My xAPI Analytics Goals for 2023 was how I’m working mindfully to coach, not consult, my team. I want our individual contributors to gain the skills and the hard learnings of working with xAPI so they can grow their skills and mature our practices without depending on me to spoon-feed information when it finally seems relevant (to me). As a result, I get questions and feedback that help me understand more effectively how, at a team level, I can expedite things and clear paths for people to learn, experiment, and grow, while also gaining some clarity around the challenges with xAPI and, specifically, xAPI Profiles that we can address through IEEE at the standards level.
My very capable tech lead asked me yesterday, “When we add the functionality of the reports into our new reports app, is there a way to handle that which is more tenable and not so specific to our content? Basically, what data do we have through our current content vs. what we might expect if, say, we made some other content using xAPI in DominKnow or whatever, and how could those various sources of xAPI be visualized or at least compared/cross-referenced? tl;dr: how do we do a generic xAPI report from a data perspective?”
My guess is that others have this question, so I’ll share what I shared in response.
A generic xAPI report is probably useless outside of the technical exercise of making that happen as a Proof-of-Concept.
However, known reporting patterns for eLearning make some sense.
I generally dislike general reports for eLearning. Because the LMSs and other tools that do this don’t talk about things like learning objectives, competencies, or progress toward skill acquisition, they pretty up the administrata: all the metrics that can easily be counted from SCORM (session length in time, completion/success status, scores). With xAPI we do more, and internally we can propose and gain consensus and commitment on internal standards and the ways we do things…
So one possible output, building on cmi5, is a general eLearning profile I could architect that would give our content stakeholders a map of what they have to track, at a minimum, with the eLearning they author out of DominKnow.
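To make that a little more concrete, here’s a rough skeleton of what such a profile document could look like, written as a Python dict so it’s easy to serialize and sanity-check. Every IRI, label, and name below is a placeholder I’m inventing for illustration; this is a sketch of the general shape of an xAPI Profile, not our actual profile.

```python
import json

# Hypothetical skeleton of a general eLearning xAPI Profile built on cmi5.
# All IRIs and labels are placeholders for illustration only; a published
# profile also needs versions, author, and fully specified rules.
general_elearning_profile = {
    "@context": "https://w3id.org/xapi/profiles/context",
    "id": "https://example.org/profiles/general-elearning",  # placeholder IRI
    "type": "Profile",
    "conformsTo": "https://w3id.org/xapi/profiles#1.0",
    "prefLabel": {"en": "General eLearning Profile (draft)"},
    "definition": {
        "en": "Minimum tracking requirements for eLearning authored out of DominKnow."
    },
    "concepts": [
        # Concepts: the vocabulary (e.g., an activity type for a learning
        # objective, the verbs we require authors to use).
    ],
    "templates": [
        # Statement Templates: the "what you must track, at a minimum" rules,
        # e.g., every cmi.interaction statement references at least one objective.
    ],
    "patterns": [
        # Patterns: allowed sequences of templates across a session
        # (launched ... interactions ... terminated).
    ],
}

print(json.dumps(general_elearning_profile, indent=2))
```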
One very simple rule is that cmi.interactions in any eLearning are tagged to the learning objective(s) or competency(/ies) in a similar way to how we’ve done it with our existing content. We can create rules for that in an xAPI Profile that, eventually, can be read and understood by authoring environments with the appropriate rules translation. That way it’s built into the authoring workflow but governed by the xAPI Profile, so the rules translate dynamically into authoring-tool behaviors that shape how a content author tags learning interactions.
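As a hedged sketch of what a single tagged interaction could look like on the wire, here’s one possible statement, again as a Python dict. The objective IRI, the interaction ID, and the choice to put objectives under contextActivities.grouping are assumptions for illustration; pinning down exactly that placement is the profile’s job.

```python
import uuid
from datetime import datetime, timezone

# Illustrative xAPI statement: a cmi.interaction answered, tagged to a learning
# objective. IRIs, IDs, and the use of contextActivities.grouping are assumptions
# made for this sketch, not settled conventions.
statement = {
    "id": str(uuid.uuid4()),
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "actor": {
        "objectType": "Agent",
        "account": {"homePage": "https://example.org", "name": "learner-123"},
    },
    "verb": {"id": "http://adlnet.gov/expapi/verbs/answered", "display": {"en": "answered"}},
    "object": {
        "objectType": "Activity",
        "id": "https://example.org/courses/c1/interactions/q7",  # placeholder interaction IRI
        "definition": {
            "type": "http://adlnet.gov/expapi/activities/cmi.interaction",
            "interactionType": "choice",
        },
    },
    "result": {"success": True, "response": "b"},
    "context": {
        "contextActivities": {
            # The "tag": each interaction points at the objective(s) it evidences.
            "grouping": [
                {
                    "objectType": "Activity",
                    "id": "https://example.org/objectives/obj-2",  # placeholder objective IRI
                    "definition": {"type": "http://adlnet.gov/expapi/activities/objective"},
                }
            ]
        }
    },
}
```

However it lands, the point is that the objective reference travels with every interaction statement, so reporting never has to reverse-engineer it from course structure.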
xAPI Profiles Help Us Stop Re-inventing Wheels
I want to thank longtime friend and colleague Jason Haag of Veracity for pointing out a brand new xAPI Profile that’s part of the Navy’s xAPI Library — a Performance Assessment profile that has a LOT of overlap with the use cases we have for our Analytics Strategy for Surveys-at-Work: Casey-Fink. When I wrote up my goals last month, a main reason for blogging again was to make sure that, as we pioneer getting stuff done with xAPI here, we stay in lockstep with the better practices of my colleagues. I’m so happy Jason’s been following the blogging and chimed in. That saved the team hours of work.
If you don’t know about the Navy’s xAPI Library, you really should. Veracity, among other vendors, has been working with the US Navy, specifically the Naval Education and Training Command (NETC), to make working with xAPI easier at scale through a consolidated web portal for acquisition language (for contracts), xAPI Profiles, and common code libraries that ease and accelerate development and quality engineering, so everything *just works* as turn-key as possible.
Reporting Strategy?
Much like what the Navy has done, it makes sense to start with generic reporting approaches organized around modalities, that is, how the learning experience is delivered. The following modalities might be where our team starts as we work toward a new approach for scaling our reporting.
- eLearning
- Videos and non-interactive Animations
- Surveys
- Journaling
And then, once we have some operating agreements among content authors/contributors to the platform about how we tag user interactions when they “answer” or “respond” to our media, with profiles doing the heavy lifting of linking those interactions to context with some semantic meaning, we can potentially get into somewhat generalized or templated approaches to reporting things like the following (see the sketch after this list)…
- progress towards competency/skill attainment
- learning experience bounce
- clinical and professional skill maturation
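As one example of what “templated” could mean once tagging is consistent, here’s a minimal sketch of the first item, progress toward competency/skill attainment, rolled up from statements shaped like the one above. The weighting, retry handling, and the mapping from objectives onto competencies are all hand-waved; this is an assumption-laden illustration, not our report logic.

```python
from collections import defaultdict

# Rough sketch: roll up "progress toward an objective" as the share of distinct
# interactions a learner has answered successfully, per learner per objective.
# Assumes statements follow the tagging convention sketched earlier.
def objective_progress(statements):
    seen = defaultdict(set)    # (actor, objective) -> interactions attempted
    passed = defaultdict(set)  # (actor, objective) -> interactions answered successfully
    for s in statements:
        actor = s["actor"]["account"]["name"]
        interaction = s["object"]["id"]
        groupings = s.get("context", {}).get("contextActivities", {}).get("grouping", [])
        for g in groupings:
            key = (actor, g["id"])
            seen[key].add(interaction)
            if s.get("result", {}).get("success"):
                passed[key].add(interaction)
    return {key: len(passed[key]) / len(seen[key]) for key in seen}
```

The exact math matters less than the fact that, once the tagging convention holds, the same roll-up works across eLearning from DominKnow or anywhere else, which is what would make a “generic” report actually worth having.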
The bounce report is something I’m interested in modeling internally and for y’all out here in the world. Where do people drop off in a curricular program? Where do they lose the motivation to keep going at a pace? We’re authoring content with fairly complicated decision trees that provide psychometric feedback to help pinpoint where a learner is confused in terms of subject matter. But to improve the learning experience of that content, it would help to know just where in the decision tree(s) learners stall, so we can de-complicate the questions people get lost on, and maybe even increase the difficulty or change the decision options on questions that are too simple and sap the motivation to stick with the harder ones.
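Here’s a similarly hedged sketch of how the bounce roll-up could work: find the furthest step each learner reached in an ordered curriculum, then count how many learners stop at each step. The curriculum list and the pre-filtered statements are hypothetical inputs for this illustration, not a description of our pipeline.

```python
from collections import Counter

# Sketch of a "bounce" roll-up. `curriculum` is the ordered list of activity IRIs
# for one program; `statements` is assumed to be already filtered to that program.
def bounce_report(statements, curriculum):
    order = {activity_id: i for i, activity_id in enumerate(curriculum)}
    furthest = {}  # actor -> index of the furthest step seen
    for s in statements:
        actor = s["actor"]["account"]["name"]
        idx = order.get(s["object"]["id"])
        if idx is not None:
            furthest[actor] = max(idx, furthest.get(actor, -1))
    drop_offs = Counter(curriculum[i] for i in furthest.values())
    total = len(furthest) or 1
    # Fraction of learners whose journey ends at each step of the curriculum.
    return {step: drop_offs.get(step, 0) / total for step in curriculum}
```

A spike at a particular step would be the cue to look at that decision tree and either de-complicate the question or rework its options.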
What do YOU think?
One response to “Data Strategy Starts with Reporting Strategy”
Thanks for this and all the work you do 🙂
YES to bounce reporting! Yes to all the things that a webmaster would expect to see from their web analytics. If we had that baseline down, it would be much easier to communicate additional value to the partners we need to pull any of this stuff off. The fact that we don’t meet their basic expectations yet raises suspicion and makes everything a harder sell than it needs to be, in my experience.