The tl;dr: Like hardware, when it comes to data, if you can’t open it, you don’t own it.
I was recently asked to respond to a question, “What were the objectives of the Experience API?”
- Enable data interoperability across systems.
- Enable people to own their data.
- Surface the evidence linking learning and performance.
- Lower the barrier to entry and remove the obstacles to creating and improving learning experiences.
- Encourage innovation, diversity and economic opportunity within (and outside of) the learning technology community.
- Promote citizenship, agency and autonomy for individuals in a digital world.
This shouldn’t be a surprise. I’ve been tooting this horn for years. AFAIC, if we’re not aiming toward these things (ALL the things), then we’re missing the point.
Now, normally, it takes years for anyone to use a spec like what we’ve created. None of us standards nerds are used to the market so quickly adopting what we create. We usually have all this quiet time in the shadows and in relative obscurity to discuss, debate and eventually figure out how to turn what we wrote into what we meant. So… good job team: we standards nerds knocked out something that industry is super excited about and, in some respects, ready for.
This success comes with a sharp edge: it’s exactly the meteoric adoption of what the spec is today that will slow down getting to everything else on the list above. So this post is a call-out to the collective navel-gazing in the spec community. Conversations keep focusing solely on #1 above, data interoperability, as if it were independent of any larger context. That’s absolutely not the case, and continuing on our current path will keep adoption from growing and will limit the true impact of what we’ve begun to do.
We’re maybe 70% of the way there with data interoperability. I know that closing the gap to fully meeting the whole list of objectives above will be really tough. If you buy into the Upper Limit Hypothesis, the road ahead will take more energy and more cooperation than it’s taken to get where we are now. I bring this up because, unless we address data interoperability in (at least) its next larger context — data ownership — we will only drive ourselves as a community into the same ruts as previous spec efforts.
I was, shall we say, “eloquent” in my post from September highlighting the way forward for IEEE to standardize the Experience API. Admittedly, I could’ve made it even more focused, which is what I’m trying to do now.
Looking at what Megan and I are implementing with xAPI, I have to call out where, in practice, the tools conforming to the spec aren’t helping with data ownership. What I can see now as an implementer, and couldn’t see from inside ADL, is that realizing data interoperability will partly be a matter of improving the spec. The more important work we’ll need to do, as an industry, is to improve how we commonly implement xAPI as vendors and consultants, how we call out when that practice is or isn’t happening, and how we onboard new tools and toolmakers into the ecosystem we should be creating.
Vendor lock-in happens as LRSs make data LESS accessible to work with, perpetuating the scenario LMS vendors created with SCORM. Adopting the Experience API is supposed to distribute power and opportunity, not create a new hegemony.
So, with respect to my listed objectives, here’s where things stand.
Enable data interoperability across systems.
In order to ensure data interoperability across systems, we need third-party tests to evaluate a tool’s and an activity provider’s conformance to the specification. There’s a group hosted by ADL that is working towards this. That’s exactly as it should be. It’s never fast enough, but it’s at least happening.
Enable ~~people~~ organizations to own their data.
Where we’re going to fall short with these conformance tests is the gap between what the spec literally states and our human attempts to translate those instructions into implementation. When it comes to data ownership, there’s a ton of ambiguity around what that even means, and a LOT of things have to happen in order to see that objective through.

There have been attempts to help individuals directly (and firstly) own their data. Ben Betts at HT2 went down this path with Learning Locker, originally a Personal Data Locker, only to find there wasn’t a market for it. He rightfully pivoted on the realization that for people to own their own data, organizations need to own their own data first. With the Learning Locker LRS, HT2 is now giving organizations full access and ownership (in the maker sense) of the most popular open-source LRS on the market, and giving people more direct means of accessing the data collected there than the Statement API. Other LRS vendors are following suit.

This can’t happen fast enough, and to be absolutely fair, everyone in this field is new at this. Activity Providers could be a lot more transparent about what interactions they’re capable of tracking. The use cases are only starting to translate into case studies. Again, though… this can’t happen fast enough.
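To make the transparency point concrete: everything an Activity Provider tracks boils down to xAPI statements of the form actor–verb–object, so a provider could simply publish the verbs and activity types it emits. A minimal sketch of a statement (the learner, verb, and activity IDs below are illustrative, not from any real provider; the `completed` verb ID is from ADL’s published verb list):

```python
import json

def make_statement(actor_email, verb_id, verb_name, activity_id, activity_name):
    """Build a minimal xAPI statement: who (actor) did what (verb) to which activity (object)."""
    return {
        "actor": {"objectType": "Agent", "mbox": f"mailto:{actor_email}"},
        "verb": {"id": verb_id, "display": {"en-US": verb_name}},
        "object": {
            "objectType": "Activity",
            "id": activity_id,
            "definition": {"name": {"en-US": activity_name}},
        },
    }

# Hypothetical example: a learner completes a course module.
stmt = make_statement(
    "learner@example.com",
    "http://adlnet.gov/expapi/verbs/completed", "completed",
    "http://example.com/courses/safety-101", "Safety 101",
)
print(json.dumps(stmt, indent=2))
```

A provider that documented even this much — which verb IDs, which activity definitions — would go a long way toward the transparency argued for above.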
Surface the evidence linking learning and performance.
It can’t happen fast enough because, to surface the evidence between learning and performance, one needs to make sense of the intentional capture of learning experiences, in the form of what Megan and I will call learning informatics. Analytics might well be how you find patterns in tracking… whatever… but informatics are produced because the design is deliberate in wanting to tell you how well a particular intention is being realized. To do this, people are designing patterns of interactions within (and across) content items to be tracked with xAPI. To get tailored reporting on these interaction patterns, on a regular basis, the most pragmatic way isn’t querying the Statement API… it’s direct access to the data, preferably in a replicated data source so that you’re not impacting the performance of your database or LRS. But if organizations can’t access their data directly, they can’t do this effectively.
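To illustrate why the Statement API falls short for this: the spec’s GET /statements resource filters on a fixed, coarse set of parameters (one agent, one verb, one activity, a time window), so any cross-statement pattern has to be assembled client-side by paging through results. A hedged sketch of building such a query (the LRS endpoint is hypothetical; a real request would also need auth and the X-Experience-API-Version header):

```python
from urllib.parse import urlencode

# Hypothetical LRS endpoint.
LRS = "https://lrs.example.com/xapi/statements"

# The spec's filters are coarse: one verb, one activity, a since/until
# window, a page size. Pattern-specific reporting means fetching pages
# of matching statements and post-processing them yourself.
params = {
    "verb": "http://adlnet.gov/expapi/verbs/completed",
    "activity": "http://example.com/courses/safety-101",
    "since": "2016-01-01T00:00:00Z",
    "limit": 100,
}
query_url = f"{LRS}?{urlencode(params)}"
print(query_url)
```

Compare that with direct access to a replicated data source, where one query can join, group, and window across statements at once — exactly the tailored, regular reporting the paragraph above describes.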
Lower the barrier to entry and remove the obstacles to creating and improving learning experiences.
If organizations can’t readily make use of the data that’s being captured in learning record stores, then they can’t use that data effectively to improve learning experiences, let alone evaluate their assumptions about how learning impacts performance. This creates unintended barriers to entry for doing informatics work, as it forces the use of general purpose approaches that get you sorta-kinda what you wanted, but not exactly what you need.
Encourage innovation, diversity and economic opportunity within (and outside of) the learning technology community.
When the market is only sorta-kinda getting what it wants, and is confused by the rhetoric (and let’s be brutally honest: by the confusion perpetuated by some people about what the technology is even called), that doesn’t encourage newcomers to explore new niches of tools to add to the ecosystem we all started building with the Experience API. Intentionally or not (I’m assuming not), the ways in which we’re failing to make the data in LRSs more accessible to work with force a reliance on vendor lock-in, basically perpetuating the exact same scenario SCORM presented with LMSs. This is the very thing we should be trying to avoid. Admittedly, some vendors thrived because of this with SCORM. The intent, as clearly presented many times and again above, is that the Experience API is supposed to distribute power and opportunity, not create a new hegemony through vendor lock-in.
Promote citizenship, agency and autonomy for individuals in a digital world.
And that’s just so we can maybe empower organizations to own their own data… meaning they can control how their data gets used… they know what data is actually being tracked within their firewalls… they can be accountable and have agency to use that data to make themselves better organizations…
We need to empower organizations to own their data so individuals can own their data, too.
We need to empower organizations to own their data first so that we can one day empower individuals in kind. The spec is silent on how that data is made available for use outside of the Statement API, and in practice the Statement API is maybe just fine for sharing data across systems, but it’s not super helpful when you’re trying to use the data in the LRS to inform decisions.
Whether that’s something to address in the specification or something for us all to establish as common practice, this is a conversation I’d like to be part of, and one I hope, for the betterment of adopters, we all join so that we establish some common expectations.
We must strive to get data ownership right for organizations, so that the practice and the tools mature for individuals (read: consumers) to have meaningful and sufficient data worth owning, and enough tangible reasons to own it.