Standard Options Apply

This post is for vendors, developers and implementers of the Experience API and for anyone who may one day create something with xAPI. The choices made soon will ultimately impact what you will be able to do with xAPI 3, 5 and 20 years from now.

I’m going to lay out options for how we can approach standardization and the reasoning behind them. I’m going to ask you as vendors, entrepreneurs, developers, designers from all around the world to provide me with your counsel as I ultimately must direct the effort. I ask you to support your interests with your participation as we go forward.

This is a long read. There’s nuance that’s not easily summarized. Please make the time to read, comment below, post responses that link to this statement.

In August 2014 I participated in one of the IEEE Learning Technology Standards Committee (LTSC) monthly meetings. This particular one was special, as we were formally reviewing the Project Authorization Request (PAR) for the Experience API to become a formal standards project. Once the request is made and approved by IEEE’s New Standards Committee, we begin the last leg of a journey that started with friends riding roller coasters at Cedar Point amusement park in Sandusky, OH back in 2009.

The PAR wasn’t approved by the LTSC in that August meeting. It wasn’t the slam dunk I was naively hoping it would be. There were questions raised by the committee that may have easy responses, but the easy responses I can share aren’t necessarily the better responses we need.

This has me reflecting deeply about what kind of future with xAPI we need to enable. So here goes.

We (as citizens of the world) generally need better responses to the tiny events that color the big picture. The last couple of months, looking at the recent events in the US and around the world, looking at our own work with and in organizations that are dealing with a stagnant economy, looking at ourselves… looking at myself…  it’s so desirable to do the easiest or simplest thing in any given scenario. It’s impossible to figure out the best thing to do, because the future is filled with rabbit holes and we never can go down all of them.

We must be mindful of our options and deliberate about our choices.

When we’re talking about xAPI, we must appreciate that there are already millions of dollars invested (in the value of people’s contributed thoughts, their time and actual capital) in development and adoption. However, we have to also be mindful of the billions of dollars to be invested in xAPI going forward.

If SCORM taught us anything, it was these two things:

  • First, it taught us how to make real money in learning technology by formalizing how we commonly approach enterprise software;
  • Second, it taught us how costly it is to not be mindful or deliberate about our choices, technical and political, at the specification and standards level.

I can feel some of you bristling already about the focus I have on the financial perspective. My perspective is this, and I can’t say it strongly enough: there’s no way we can make real change in learning, education and training without it being financially viable. Money is what makes things happen. I feel a responsibility to make sure xAPI is designed well enough to encourage the investments that make the promises of the spec actionable.

As an industry, we’ve gotten this far, this fast, with xAPI, and must continue to do so, precisely because people can find ways to profit from their investments in sweat, time and capital AND make the world easier to learn from. I want to make it as easy as possible for people to innovate and solve real-world problems with this specification. I want to encourage it by keeping it as open as we can AND by making it possible for the best approaches, not just the best spin, to find adoption.

We’re on the verge of something. We can take this open specification and transform it into an international standard that will catalyze data interoperability across systems. Done well, this enables people to “own” their data, promoting citizenship and personal autonomy in a world that’s more and more digital. Or… we just take this open specification as it is, and try to keep the scope to simply transposing it for standardization, ensuring that adoption years from now will look pretty similar to what it looks like today… which looks exactly like SCORM.

As the leader of this standards effort, I want to hear what you have to say. I want to consider diverse opinions and insights that don’t come from within my echo chamber. In the end, I will ultimately make the decision about the scope of the standards project.

These are the rabbit holes, and trying to go down them all, repeatedly, is exhausting.

Consider Breaking Up the Spec Into Separate Standards Efforts

Some in the LTSC are very familiar with the European Union’s policies on privacy, security, data ownership and the rights of individuals in digital spaces. In response to their concerns about “tracking,” which rightly furrows eyebrows and adds wrinkles prematurely to us all, a suggestion that gained momentum was that we consider breaking up xAPI into three separate standards efforts — three different documents to be linked together. Doing so would make it possible to isolate the areas of the existing spec that cause concern. This approach has some advantages that I’ll expand on.

Think about this like we think about WiFi. “WiFi” is essentially a set of IEEE standards — its project number is 802.11. There are different forms of WiFi that co-exist — 802.11a, 802.11b, 802.11g, 802.11n… Each does a slightly different thing, but altogether any/all of these are “WiFi.” This is the frame to consider for the Experience API. “xAPI” will have its own project number (1484.xx) and it would look like this:

1484.xx.a – A standard for the Data Model would describe how xAPI statements are formatted. This would remove the need to use the Statement API or have a Learning Record Store in order to store data in statement format. Since the data model can be applied generally, statements can be used in lots of ways, which lowers the barrier to entry, encourages more adoption and (in turn) could bring in a lot more activity providers. You may ask, “Why would someone only want to use the data model?”

Real use-case: One current “adopter” of xAPI is only using the data model, without an LRS. I put adopter in quotes because, according to the spec, without the LRS, he’s not conformant. Anyway, in his implementation he’s using the JSON binding for Activity Statements to track what people are doing in his software, in the context of how people use the software to accomplish specific tasks. He’s storing the statements in his own database and has no reason to share them with another system. He’s not taking in statements other than those he’s designed. He is simply using the data model to track activity in a consistent way in case one day he does need to share them, but right now there’s no reason to incur the cost of an LRS or use the Statement API.
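To make that concrete, here is a minimal sketch (in Python, with hypothetical names, IRIs and database) of what “data model only” adoption can look like: the record follows the actor/verb/object shape of an Activity Statement, but it is simply written to the application’s own local store, with no Statement API and no LRS involved.

```python
import json
import sqlite3
from datetime import datetime, timezone

# A record shaped like an xAPI Activity Statement (actor / verb / object).
# The actor, activity ID and database name here are made up for illustration.
statement = {
    "actor": {"mbox": "mailto:pat@example.com", "name": "Pat Example"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "http://example.com/app/tasks/export-report",
        "definition": {"name": {"en-US": "Exported a report"}},
    },
    "timestamp": datetime.now(timezone.utc).isoformat(),
}

# Keep it in the application's own database -- no Statement API, no LRS.
conn = sqlite3.connect("activity.db")
conn.execute("CREATE TABLE IF NOT EXISTS statements (body TEXT)")
conn.execute("INSERT INTO statements (body) VALUES (?)", (json.dumps(statement),))
conn.commit()
conn.close()
```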

1484.xx.b – A standard for the Statement API would then act as the means to validate statements made, whether in an LRS or not. As it is now, an LRS is really useful in concept for data transfer, but most adoption currently isn’t around sharing data across LRSs, and if you’re into doing more “big data” (or, more aptly, “messy data”) mashups, an LRS only keeps xAPI statements. What this would allow is a means by which any database or web application could let in or keep out junk statements, and use xAPI statements as a system might use any other data source. You may be asking, “Why would someone only want to use the Statement API?”

Real use-case: Some of the largest educational publishers are implementing the Statement API and data model into their existing internal data storage to validate xAPI-formatted Activity Statements before accepting them into their data warehouse, along with all sorts of other data they’re tracking. They have no intention of sharing this data with any other system, and they don’t want the segregation of xAPI Statements from the other data they’re collecting. Rather, they want the xAPI data co-mingled with these other data to get a fuller analysis of how people are using their materials.
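As a rough illustration of that gatekeeping role, here is a hedged Python sketch (hypothetical function names, and far less strict than a real Statement API) of checking that an incoming record at least has the required actor/verb/object pieces before it gets co-mingled with other warehouse data.

```python
REQUIRED_TOP_LEVEL = ("actor", "verb", "object")


def looks_like_statement(candidate: dict) -> bool:
    """Very rough gate: does this record have the required statement parts?

    A real Statement API does far more (IRI checks, verb display maps,
    result/context rules, version headers); this only sketches the idea
    of accepting or rejecting records before they reach the warehouse.
    """
    if not all(key in candidate for key in REQUIRED_TOP_LEVEL):
        return False
    verb = candidate.get("verb")
    return isinstance(verb, dict) and "id" in verb


def accept_into_warehouse(candidate: dict, warehouse: list) -> bool:
    # Keep junk out; co-mingle valid statements with other tracked data.
    if looks_like_statement(candidate):
        warehouse.append(candidate)
        return True
    return False
```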

1484.xx.c – A standard for the Learning Record Store would focus on the portability of data among systems, and on the authentication and interfaces that connect its various functions with other systems. Creating an LRS is the most difficult and complex part of xAPI, and its uses are scoped only to activity statements that are valid xAPI statements. Anyone who’s built an LRS themselves loathes its complexity, and this effort will also mean figuring out the privacy, security, data ownership, transport and exchange mechanisms that we’ve put off because they were too complex… but if we want real international adoption of xAPI, this will need to be addressed for the European Union. Or it won’t… and the failsafe is that the above two specifications can garner international adoption without a lot of pushback, and LRSs as they are can exist where they can.

Currently, adoption of xAPI is very LRS-centric. I personally believe that the LRS is not the most valuable part of xAPI. I enthusiastically embrace LRSs as a product category, but it’s important to remember that LRSs-as-discrete-applications was never the intent. Rather, an LRS describes a scoped set of functionality that could be part of any app, any software, anything that reads data generated by another app or piece of software. The LRS is currently the most marketable concept people understand because we all can relate an LRS to our expectations of what a learning management system does. The key to the long-term value of standardization comes not from a spec that revolves around an LRS, but from a spec that is focused on the data itself and the myriad ways it can be exchanged. As my friend Steve Flowers put it, think about LRSs as antennae, not fortresses.

You are likely asking, “Why would anyone want to use the LRS without the Statement API or the Data Model?”

Real use-case: Companies (plural) tried to build Personal Data Lockers. They wanted to make it possible to share a learner’s activity data across systems — not just keep it inside one LRS. The intent was for the data to follow the learner from system to system. Rather than think of an LRS as a fortress that holds all the data, these companies were trying to follow the original vision of the LRS as antennae that send and receive data that follows the learner wherever they go. These implementations weren’t fully conformant to the spec, because sharing data according to the spec as it is… well, it is really hard. Ironically, in the two cases I’m thinking about, both companies turned their attempts at Personal Data Lockers into full LRS products.
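To show what the “antennae” framing can look like in practice, here is a rough Python sketch that uses the statements resource an LRS exposes. The endpoints, credentials and version header value are made up for illustration, and a real relay would also have to handle paging (“more” links), voided statements and deduplication.

```python
import requests

# Hypothetical endpoints and credentials -- substitute your own.
SOURCE_LRS = "https://source-lrs.example.com/xapi"
TARGET_LRS = "https://target-lrs.example.com/xapi"
HEADERS = {"X-Experience-API-Version": "1.0.1", "Content-Type": "application/json"}

# "Antenna" behavior: read recent statements from one LRS and relay them to
# another, so the data follows the learner rather than sitting in a fortress.
resp = requests.get(
    f"{SOURCE_LRS}/statements",
    headers=HEADERS,
    auth=("source_user", "source_secret"),
    params={"limit": 50},
)
resp.raise_for_status()

for stmt in resp.json().get("statements", []):
    requests.post(
        f"{TARGET_LRS}/statements",
        headers=HEADERS,
        json=stmt,
        auth=("target_user", "target_secret"),
    )
```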

“xAPI” would then be the term that describes the general set of standards and what they enable, but each individual standard deals with something distinct, supporting the greater whole.

The hope in this approach rests on the fact that the xAPI specification community more or less held a high level of amity while the spec was being developed. The shine may have dulled a bit in the last year and a half as competing vendors polish their own chrome more (and some are admittedly better at it than others), but this approach may well forge new opportunities for cooperation and competition, as well as sweeten the honeypot of adoption. We need to make xAPI friendly for more adopters — starting with those who have chosen not to use a standard and instead built something proprietary because they couldn’t adopt only the part of our spec they needed. If we can really spur more interest and adoption, and widen the possible ways in which people can adopt, every vendor participating stands to gain in a larger market. By making it easier for people to adopt specific parts of the would-be standard, we enable use cases on the fringe of our imaginations that may emerge as the strongest and most valuable use cases. By making the LRS its own standard, the things that were really difficult to address at the spec level — like how data is shared across different LRSs — would be given their due attention. By making the Data Model and the Statement API their own standards, we enable adoption for use-cases where lower barriers to entry are needed. By making the Data Model its own standard, we encourage more Activity Providers. Given how LRS-heavy adoption of xAPI is now, we need this to grow.

The risk in this approach is that it will be a freaking difficult path. It will likely break current implementations. To be honest, I don’t have a pony (read: product) in this race, and breaking changes in any approach to standardization are inevitable. I don’t personally stress about that part. I worry much more about how the LTSC can manage three concurrent standards projects that must work together. It requires a lot of attention and participation, and the kind of cooperation and amity among competing interests that has sometimes failed standards groups. It will take longer to create the standards this way, though some things — specifically the data model — may be ready to standardize sooner.

Consider Keeping the Spec “As-Is” In One Standard

All of the above stated, it is admittedly tempting to try to keep the spec as it is, even if it narrows the spread of adoption.

One reason is that there are over 60 vendors adopting it with no interoperability issues with the data — though few have tried to share data across different LRSs. Still, that’s a pretty damn significant reason on its own merit. Over a year past its release as Version 1.0, there are as many (or more) open source LRS options as there are commercial options. As I said before, the LRS isn’t really where the magic is with xAPI, but given the framing of the specification, the conversation around xAPI has largely become an attempt to answer the question “What can an LRS do for your organization?”

To be honest, that’s a question that at least has a more immediately tangible response — an easy response. I don’t love that question, but as a pragmatist and someone who wants to see things get done and catalyze economic growth, not just to make existing vendors more wealthy but to encourage new players to compete on a level playing field so that the best products find adoption… (breathe) …that’s a framing that’s focused and easy to design and develop for.

If one considers xAPI as designed to solve a fixed set of issues in response to some current (think the last five years or so) challenges with eLearning (particularly with how we approach communication with an LMS outside of that environment), then, while incomplete on its own for eLearning, xAPI is an amazing success story. That we can use web services and describe a consistent (albeit imperfect) approach to handling offline activity and syncing localized activity data back to an LRS… this is a huge advancement beyond what we’ve done with SCORM — even as we acknowledge that it doesn’t replace SCORM. People still need content interoperability. xAPI is about data interoperability. They are not the same thing, and modeling our approach to data on how we approach content is tempting, but misleading.

The hope in keeping the spec together in one document is that, well… it’d be easy, right? It’s an existing spec. It works. People use it. One can argue as I have above that it’s not supposed to be about the LRS, but practically speaking, it is whatever it is. There’s plenty of room to innovate and differentiate with the specification as it exists. It may be imperfect, but it does work beyond just fixing things that we eventually figured out were really stagnating about SCORM. If we could get the scope through LTSC and IEEE’s New Standards Committee, it might only take two years and we’d have one legitimate standard that could be adopted internationally.

The risk in following this path is that we’re ignoring the opportunity to create something better. While going this path doesn’t necessarily shut down the ability for people to own their own data, or to move data around from system to system, or even to make that transfer more secure and respect privacy, we’re forever linking the components above so tightly that they will stay a closed loop. Only the learning technology community will care for this and adopt it, making it difficult for HR, Enterprise Management and ERP systems (let alone audiences we’ve never talked with who might just want to adopt the data model), because, well… it’s “learning” and it requires adopting “all the things” in the spec. And whether you care about adoption in the EU or not, the smart money says we need to look beyond learning departments inside of enterprise. Talent is the new Learning, and if we keep the spec monolithic, we risk missing the greener shores where it could find meaningful adoption. And let’s not forget what happens if we ever need to revise this one document. Should we ever need to make a change — even something as simple as a new transport mechanism, or the structure of a statement — the whole spec is going to be opened for revision. It’s near impossible to effectively manage an international standardization process that restricts scope at the document level.

The EU may not be interested in xAPI as one document that reflects the current specification because of its ambiguities around security and privacy. They may be justifiably squeamish about tracking. As Avron Barr reminded me, we’ve certainly seen with the vehement rejection of InBloom that even in the United States, we all have some concerns about the privacy and security of learning data. Certainly, though… corporate, government and military interests in Asia and Latin America may embrace the spec as it is, simply because it solves a set of very painful problems and it does that well. And… even in the case of the EU, while the standard may likely break current implementations, it’s possible to focus the accommodations for security and privacy concerns on the areas that are prone to remain stable. Still, though, the way the spec is now, it forces adopters to collect data and to make it sharable. That’s not in the best interests of every organization.

Where I Stand

I debated weighing in myself on where I stand, but for those of you on the fence, maybe it will help you wrap your head around this nuanced issue. I personally lean on the side of breaking up the spec into three standards.

My wise friend Tom King put it like this:

“This issue could be framed as a core issue of monolithic versus modular. Or perhaps framed another way – what makes a spec, any spec, good?

A monolithic approach has a few key benefits. And it seems better when there is no concern about backward compatibility and limited concern about forward flexibility. It can also help with clarity as compatibility and adoption is “all-or-nothing” for the players. As a “1 document” spec there is just one big piece to manage – likely a speed advantage if document processes offer zero parallelism – and the ‘go backs’ all happen in one larger process if changing spec’d functionality in one place impacts a different spec’d functionality.”

A modular approach has its benefits. Tom shared his thoughts about the ACID test for databases. ACID stands for atomicity, consistency, isolation and durability. These are goals that every database should strive for, and when a database fails at any one of these, it is considered to be unreliable.

Tom asked me, “In this light, what makes a standard ‘reliable?’ Does one approach favor more or fewer of the ACID elements?” A modular approach is certainly atomic; it helps to ensure there’s consistency going forward for each component; it isolates the potential impact of changing any one component without needing to change the other components; it ensures that the pieces, should they never need a change, can endure and find more and more interesting uses. It’s not the only litmus test I consider, but it’s a good one.

While the investment of thought, time and money that has gone into xAPI so far is significant, like Tom, I don’t know of any organization that is currently so dependent on xAPI as it exists today that its bottom line is at significant risk from changes to the current spec or a delay in standardization. Especially when I consider the long term.

If we go with the monolithic approach, it will likely make it difficult for people to innovate beyond the initial vision. We can’t foresee all the architectural decisions that would constrain us down the road, but we know from our history with learning technology specifications that something as simple as the requirement that SCORM’s API be presented in a “web browser” crippled any natural evolution or innovation. As Tom wrote to me, “Why couldn’t someone use the non-verb-value model just for writing/storing objectives, or assessment criteria or gap analysis?” The way the spec exists today, they can’t, but it seems to me they should.

Once the standards are established, the investment and dependency on them will only increase as long as the standards are usable and useful. As Tom suggested to me, we need to adopt an approach that is both responsible and sustainable. We’re setting up a standard that, like SCORM, will impact industries for 20-30 years (at least). If we bundle too many big pieces together into one document, we’ll render the standard inflexible.

To me, the risks of not going for a modular approach simply outweigh the risks of sticking with a monolithic approach. The opportunities to be gained by going with a modular approach, in my mind, far outweigh the opportunities we can likely predict in keeping the monolithic approach.

The standard will likely break current implementations no matter how we proceed. We must seize the opportunity to address the difficult things we haven’t addressed. We couldn’t address them before; otherwise we wouldn’t have a spec to work with at all. We can do this now. By working with a diverse team representing the EU and other parts of the world, we can deliver a set of standards that will be relevant and significant, globally, for years to come.

A timetable for a modular approach could look something like this:

  • 1484.xx.a (the Data Model) – Draft: 2014-2015; Vetted: 2015-2016
  • 1484.xx.b (the API) – Draft: 2015-2016; Vetted: 2016-2017
  • 1484.xx.c (the LRS) – Draft: 2015-2017; Vetted: 2017-2018

The Data Model could be done quicker. The API probably should start once that’s kinda locked down. The LRS could start concurrently and will likely take longer because scoping where it really has to change from the existing specification is going to take a lot of time and discussion (and probably some debate).

These are my thoughts. IEEE LTSC is the appropriate place to figure out the timetables. International adopters, outside of NATO allies, are not able to work on this through ADL for obvious reasons, but they have come (and will come) to IEEE, and all sectors of adoption are welcome to work together there.

One More Thing

Even though I lean one way more than the other, while I lead this effort, every member of the LTSC who participates on the standards project (or standards projects) for xAPI is a volunteer with a vote. Starting with the intent to keep the spec as-is into standardization is no guarantee that it will stay as-is. Put another way, regardless of the path I scope, it’s necessary to know that current implementations will one day break for one reason or another.

What’s important to me, and what I think should be important to you, is the process by which the standards are shaped. The only way I can deliver a standard, or set of standards, that is better for learners, organizations and everyone’s non-trivial commercial interests in xAPI is with your active involvement and commitment to the standards effort once it launches.

If you care about what the standard will be, you will need to participate to protect your interest in it. That’s going to be a pain in the ass, but it’s honestly the only way you can hope to get what you need out of the effort. At the very least, active participation will help you to “read the tea leaves” on what the future holds. I can see this through to the end and work with you to make it the best damn standard possible… but where everyone has a vote, no one can just wave their magic jazz hands and influence votes.

I’m committed to making better choices (no pun intended). I can’t possibly make everyone happy with the decisions I need to make, but I can read, I can listen and be wiser for it.

Thanks for staying with me this far. 🙂

