Now that I seem to be back in the writing groove, it feels like time to commit with some intentionality to how I want to learn and grow my own skills this year, and to what I want to share through writing or other means.
I aim to continue blogging my working-out-loud case study of producing an analytics strategy for the Casey-Fink Survey. From it, I hope readers pick up on how I break big tasks down into small ones when managing unknowns, and get a sense of the kind of work involved in doing analytics with machine learning (and more) downstream in mind. For me, this reflection sharpens how I think through the project as part of a team, and it lets me be more of a guide than a guy with a flashlight and a checklist of what’s supposed to happen.
Part of that work now includes plans for developing an xAPI Profile for the competencies, learning objectives, and other content and materials that can be referenced in the survey. I plan to use this as an opportunity to walk through (probably as we get to the work in real time) how to architect, not merely author, an xAPI Profile with the ADL xAPI Profile Server.
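To make the shape of that authoring work concrete, here is a minimal sketch of the JSON-LD skeleton an xAPI Profile starts from, built in Python. The IRIs, labels, and version date below are hypothetical placeholders (the Casey-Fink profile doesn’t exist yet); the `@context`, `type`, and `conformsTo` values come from the xAPI Profiles specification.

```python
import json

# Hypothetical skeleton for an xAPI Profile. Every IRI and label here is a
# placeholder for illustration only; the fixed values (@context, type,
# conformsTo) are defined by the xAPI Profiles spec.
profile = {
    "@context": "https://w3id.org/xapi/profiles/context",
    "id": "https://example.org/profiles/casey-fink",  # canonical IRI for the profile
    "type": "Profile",
    "conformsTo": "https://w3id.org/xapi/profiles#1.0",
    "prefLabel": {"en": "Casey-Fink Survey Profile (example)"},
    "definition": {"en": "Concepts, statement templates, and patterns for survey activity."},
    "versions": [
        {
            "id": "https://example.org/profiles/casey-fink/v1.0",
            "generatedAtTime": "2023-01-01T00:00:00Z",
        }
    ],
    # Concepts (verbs, activity types, extensions) get filled in as the
    # competencies and learning objectives are worked out.
    "concepts": [],
}

print(json.dumps(profile, indent=2))
```

The architecture work is deciding what belongs in `concepts` (and later, statement templates and patterns); the authoring work is getting that decision into valid JSON-LD like the above.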
From there, I have two topics I want to branch into this year, as time and interest permit. One is the role a machine-readable xAPI Profile plays in making information available to other enterprise data tools, like Collibra (for enterprise data reference) and Snowflake (for storage and real-time processing of relational and object-based data). The other is the role the human-readable document that accompanies the JSON-LD profile plays in collaborating across different data stakeholders.
Sprinkled in between this large, interconnected set of topical arcs, I hope to also get into some how-tos on approaching more interesting (and more complicated) analytics questions. For example, most producers of eLearning express someone completing a piece of content. In my day-job work at Elsevier, the Transition-to-Practice product expresses completion this way, because I know I have to deliver on common expectations of what “completion” means. But I also have to educate customers and stakeholders on other ways of thinking about completion that might matter more — like “completing” a demonstration of competency in a specific skill, given all the attempts our system has recorded for it. That requires tagging user interactions and processing those interactions to make an assertion that learning has occurred. It’s non-obvious, multi-step work, which is likely why no one is writing about how to do it yet, but I aim to change that in 2023.
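The tag-then-process step above can be sketched in a few lines. Everything in this example is an illustrative assumption, not a standard: the simplified statement shape (real xAPI statements carry actor/verb/object/result structures) and the “3 successes within the last 5 attempts” mastery rule are both made up for demonstration.

```python
from collections import defaultdict

# Simplified, hypothetical attempt records tagged by actor and skill.
# Real xAPI statements would carry full actor/verb/object/result objects.
statements = [
    {"actor": "learner-1", "skill": "iv-insertion", "success": False},
    {"actor": "learner-1", "skill": "iv-insertion", "success": True},
    {"actor": "learner-1", "skill": "iv-insertion", "success": True},
    {"actor": "learner-1", "skill": "iv-insertion", "success": True},
]

def assert_competency(stmts, window=5, required=3):
    """Group attempts by (actor, skill) and assert competency when at
    least `required` of the last `window` attempts were successful.
    The window/threshold rule is an illustrative assumption."""
    attempts = defaultdict(list)
    for s in stmts:
        attempts[(s["actor"], s["skill"])].append(s["success"])
    return {
        key: sum(history[-window:]) >= required
        for key, history in attempts.items()
    }

assertions = assert_competency(statements)
# assertions[("learner-1", "iv-insertion")] is True here: three of the
# learner's last four tagged attempts succeeded.
```

The point isn’t this particular rule; it’s that “completion” becomes a derived assertion over a history of tagged interactions, rather than a single event a content player fires.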
There are multiple reasons I feel compelled to do this. For starters, I’m doing the work and I feel kind of alone in doing it at this level of quality and effort, so I’m hoping to find (or help develop) the peers I need to take learning analytics work to new levels. To that end, every Monday from 4:30–5:30pm Eastern, I’ve been maintaining a salon of sorts through the IEEE P9274.2.1 Communications Office, a subgroup of the xAPI Profiles Standards Working Group. Nominally we’re working on defining a set of competencies for working with xAPI Profiles (one that also addresses what you need to know about xAPI), but for the most part it’s a “what are y’all working on?” conversation, with me and other seasoned technologists weighing in to help whoever has a question or a puzzle to solve.
All of this activity is set for the year. My stretch goal is to take the writing that comes out of it and publish it in a more concentrated form — maybe a book? maybe a new cohort experience? maybe a tool? — something that focuses on what one needs to know, beyond the basics, to make use of tools and enterprise resources, and to get out of the learning box and into an enterprise information systems box.
My question for y’all is… would this be helpful? Is there something else that’s a challenge with learning analytics, or xAPI, that no one is talking about?