This is the fourth post in a real-time, how-to case study on an analytics strategy for Surveys-at-Work: Casey-Fink. In my last post, Slicing and Dicing Data Model Work-to-be-Done, I identified two work-streams for the team to pursue with our analytics strategy, which is, for all intents and purposes, our integration strategy between a survey being developed as a cmi5 data provider and the reporting we’ll design and build to answer a bunch of questions.
Modeling-Up with Statements
The first work-stream in that last blog post focused on identifying the different ways cmi.interactions might be expressed (at least a minimal set that supports choice, likert and fill-in question/interaction item-types). Having discussed it as a team, there was sufficient interest in having a data model of the survey itself as a sort of schema: a control against the data that informs a report, from an individual or a set of users, so that it becomes obvious which questions were skipped, if any.
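To make that minimal set concrete, here is a rough sketch of how one of those likert items might be expressed as an xAPI statement using the cmi.interaction activity type. The actor, question text, activity ID and scale values are all hypothetical placeholders for illustration, not our actual survey:

```python
# A minimal sketch of an xAPI statement for a likert-style survey item.
# Actor, activity ID, question text, and scale values are hypothetical.
likert_statement = {
    "actor": {"objectType": "Agent", "mbox": "mailto:learner@example.com"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/answered",
        "display": {"en-US": "answered"},
    },
    "object": {
        "objectType": "Activity",
        "id": "https://example.com/surveys/casey-fink/q12",  # hypothetical item ID
        "definition": {
            "type": "http://adlnet.gov/expapi/activities/cmi.interaction",
            "interactionType": "likert",
            "description": {"en-US": "I feel confident communicating with physicians."},
            "scale": [
                {"id": "likert_1", "description": {"en-US": "Strongly disagree"}},
                {"id": "likert_2", "description": {"en-US": "Disagree"}},
                {"id": "likert_3", "description": {"en-US": "Agree"}},
                {"id": "likert_4", "description": {"en-US": "Strongly agree"}},
            ],
        },
    },
    # The learner's answer references one of the scale IDs above.
    "result": {"response": "likert_3"},
}
```

Choice and fill-in items would follow the same shape, swapping `interactionType` (and swapping `scale` for `choices`, or dropping it entirely for fill-in).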
Up until this discussion, I wasn’t really planning on using xAPI Profiles for this project because, technically, we had workarounds that let us go without it. But my colleague made the case that, for a whole bunch of “what went wrong?” questions, it would be good to have a machine-readable mapping of the survey architecture: question item IDs (I’m guessing) and their hierarchy, in terms of what is delivered to the learner.
...whoa! I've not thought of composing a profile in real-time before, but for the purposes of having a true "this is what the learner was presented with" this would at least be *more accurate* and promote the content experience the user has as being authoritative. Fascinating...
Modeling-Down with Profiles
Anyway, if we’re to compose a hierarchy of survey items (some might say a “pattern” of “concepts”, in xAPI Profiles vernacular), then we may as well make all the other semantic associations we want there, too. There’s low-hanging fruit here: easy first wins for a Casey-Fink Survey integration with our product. For starters, the professional skills content, and a user’s many interactions with it, are already mapped to competencies that align with at least two of the dimensions of the survey. One whole section of the survey is focused on levels of confidence with particular clinical skill areas, and the day-job has a WHOLE LIBRARY of content that’s semantically mapped already; a few more relationships with the correct identifiers make this work easy in the background.
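As a rough sketch of the kind of model I mean, here is profile-like data (shown as plain Python, loosely shaped after the “concepts” and “patterns” ideas in the xAPI Profiles spec, not a spec-complete Profile document). Every identifier, section name, and the competency link below is a hypothetical placeholder:

```python
# A loose, hypothetical sketch of a machine-readable survey model, shaped
# after the "concepts" and "patterns" ideas in xAPI Profiles. All IDs are
# placeholders, and "alignedCompetencies" is an illustrative field of my
# own, not a spec property.
survey_profile = {
    "id": "https://example.com/profiles/casey-fink",
    "prefLabel": {"en": "Casey-Fink Survey (sketch)"},
    # Concepts: one Activity per survey question.
    "concepts": [
        {
            "id": "https://example.com/surveys/casey-fink/q12",
            "type": "Activity",
            "prefLabel": {"en": "Confidence communicating with physicians"},
            # Hypothetical link from this question to a competency that the
            # content library has already semantically mapped.
            "alignedCompetencies": [
                "https://example.com/competencies/clinical-communication"
            ],
        },
    ],
    # Patterns: the hierarchy/order of what the learner is actually presented.
    "patterns": [
        {
            "id": "https://example.com/profiles/casey-fink#confidence-section",
            "type": "Pattern",
            "prefLabel": {"en": "Skills-confidence section"},
            "sequence": ["https://example.com/surveys/casey-fink/q12"],
        },
    ],
}
```

The point is less the exact keys than the shape: question IDs, their grouping into sections, and a place to hang the competency relationships that already exist on the content side.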
Since those associations need to be made, there are two ways to do it. One is to do it all from the content (which is how I’d been thinking we’d do it, on my own); the other is to do it on the back-end, which makes version control of that hierarchy and those semantic relationships something that can be managed independently of the survey (the content).
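Either way, once the hierarchy lives somewhere machine-readable, the “control against the data” report becomes a small step: diff the declared question IDs against what actually came back from the LRS. A minimal Python sketch, with hypothetical IDs and hand-rolled statements standing in for a real LRS query:

```python
# Sketch: using the survey model as a control ("schema") against reported
# data. All question IDs below are hypothetical placeholders.
declared_questions = {
    "https://example.com/surveys/casey-fink/q1",
    "https://example.com/surveys/casey-fink/q2",
    "https://example.com/surveys/casey-fink/q3",
}

# Pretend these are the "answered" statements retrieved for one learner.
answered_statements = [
    {"object": {"id": "https://example.com/surveys/casey-fink/q1"}},
    {"object": {"id": "https://example.com/surveys/casey-fink/q3"}},
]

answered_ids = {stmt["object"]["id"] for stmt in answered_statements}
skipped = declared_questions - answered_ids  # q2 was skipped
```

Done on the back-end, this check keeps working even as the survey content itself is revised, because the declared set is versioned independently.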
I finally get to eat my own dogfood at work, and I look forward to learning the hard way just what savings this gets us as I look ahead to working with Snowflake, Collabra and other enterprise tools. In the here-and-now, though, my colleague identified important near-term work for us: a human-readable document that lists everything we want to know from this survey (like, say, a more digestible, “human readable” xAPI Profile).