
MakingBetter

Skills and Strategies to Engage, Change and Grow Others


Standards

What is the plan for DISC, xAPI, and ADL?

August 3, 2016 By Aaron

Towards the end of last year, our big announcement was the formation of a not-for-profit consortium to handle the governance and evolution of xAPI with, and for, ADL. xAPI, being open source and licensed as Apache 2.0, doesn’t require ADL’s permission for us to do this. That’s a feature, not a bug — we licensed xAPI this way so that it couldn’t be trapped inside the US DoD the way SCORM was. All the same, ADL has done an amazing job stewarding xAPI from its R&D roots to now, and to whatever degree we can work in concert with ADL (and the US DoD, by extension), it’s to our collective benefit to do so. That’s why we haven’t done a whole lot in terms of visible activity since the beginning of the year: we’ve been working out the details of what we’re going to take on with ADL so the connections between DISC and ADL can be very explicit, very concrete and very visible. ADL’s announcement about our collaboration can be found here. Here’s a look at what we have planned over the next four years.

DISC’s Year-One Priority Outcomes

To ensure the interoperability of software and hardware that purports to be xAPI-conformant, DISC is revitalizing the xAPI Conformance working group and conducting a research effort to facilitate definition of the requirements for xAPI conformance. Concurrently, DISC will work with a stakeholder group to define requirements for software and hardware certification of xAPI conformance, and to make recommendations to ADL for a program that would confer certification on software and hardware with Learning Record Store (LRS) functionality.

DISC will develop and deploy a publicly available index of such certified LRS products for use by stakeholders involved in acquisitions.

The thing is, focusing on LRSs is just the start. To make implementations of xAPI interoperable, we can’t limit the responsibility to LRSs. We also can’t focus solely on content, for a lot of reasons. One lesson from the work done to test conformance and to certify content leveraging the Sharable Content Object Reference Model (SCORM) was that certifying work products, such as content, offered diminished return on the investment in the capacity needed to certify them efficiently. After about two years of research and requirements gathering, we’ve determined that a program to certify professionals who create learning experiences leveraging xAPI, with practice-based evaluation of their work products, is an idea worth championing. So DISC will work with a stakeholder group to define requirements for such a professional Learning Record Provider certification program. We’ll explore the requirements for the contents of a body of knowledge, the professional competencies needed and the evaluation criteria. This will inform the requirements for a program that confers certification on professionals who demonstrate competence in creating learning record providers that work interoperably with xAPI-conformant LRSs.

And, just to make sure we’re all on the same page — this is all within year one, going into June(ish) 2017. Aggressive goals? Megan and I would say we’re assertive, but no less so than any other goal we’ve put out there (and met). We predict there are going to be some serious reasons why we need to get LRS certification ready for early 2017, in terms of catalysts that will drive market adoption of xAPI. The time is now, friends.

DISC’s Year 2-4 Activities

In our first year, DISC will establish a baseline for data interoperability of the xAPI specification itself. To strengthen that for the long term, DISC is going to vet the impact of supporting JSON-LD on existing implementations of xAPI, and, should stakeholders choose to support JSON-LD, DISC will take on responsibility for updating any hardware, software and/or professional certification materials, processes, procedures and evaluation criteria to support that evolution. That seems really geeky, but it’s important that before we go bigger on professional certifications and larger services, we figure out what, if anything, we’re going to do with JSON-LD as an industry. Three years ago, it wasn’t big enough for us to rally support to address it (argue about that all you want — it wasn’t). Now it clearly is, so we’re going to address it.
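To make the JSON-LD question a little more concrete, here’s a minimal sketch of what a JSON-LD-aware statement could look like: the statement itself is unchanged, and a context document maps xAPI terms to IRIs so generic JSON-LD tooling can interpret them. The @context URL below is hypothetical, and nothing about this shape is prescribed by the current spec; it’s only an illustration of the kind of change we’d be vetting.

```python
import json

# A sketch only: the "@context" entry is the hypothetical JSON-LD addition;
# everything else is an ordinary actor/verb/object xAPI statement.
statement = {
    "@context": "https://example.org/xapi-context.jsonld",  # hypothetical context document
    "actor": {"mbox": "mailto:learner@example.com", "name": "Example Learner"},
    "verb": {"id": "http://adlnet.gov/expapi/verbs/completed",
             "display": {"en-US": "completed"}},
    "object": {"id": "https://example.org/activities/safety-course"},
}

print(json.dumps(statement, indent=2))
```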

To strengthen data interoperability and to reinforce appropriate cybersecurity and information assurance practices in applications of xAPI within the US DoD and other verticals, DISC intends to identify a profile of xAPI as an exemplar for the US DoD and FedRAMP, such that it serves as a model for other xAPI stakeholder groups to use in extending the xAPI specification to work with the various laws and policies xAPI may interact with. In other words, FedRAMP describes what the US government needs in terms of security. Other nations — other organizations — may have more or different needs. By doing this, DISC would produce best-practice recommendations on data privacy considerations around xAPI and on how personal data ownership may be defined, so that cybersecurity and information assurance goals can be met from operational, business and technical perspectives appropriate to each situation.

Help?

These are just the highlights. Megan and I had some incredible help organizing our scope and putting the mandate into actionable outcomes thanks to our friend and Project Czar extraordinaire, Wendy Wickham, as well as our Board of Directors.

Referencing our previous post, DISC activity is just the first topic we had to update you on. Next, we’ll talk about what we’ve been seeing drive xAPI’s adoption recently.

 

Filed Under: DISC, Open Source, Standards

With Gratitude…

December 22, 2015 By Aaron


Megan and I find ourselves grateful, as the year comes to a close. 2015 gave MakingBetter an amazing journey that was full of surprises. Most were wonderful and some were very scary. Through it all, we found ways to make our clients happy and successful doing work we believed in. When we had struggles and found roadblocks, we worked together to get over whatever the hump was and we got to a better place.

With Gratitude

We created things with great people. Our projects included designing and developing custom reporting for software products, training providers and large enterprises. We launched an online journal, the xAPI Quarterly, kickstarting the publishing arm of our business, Connections Forum, and we ran our first events, xAPI Camps, each co-created with our participants. In January 2015 we planned for one; as the year closed we had run four, with five more scheduled for 2016. Our next one is at the Autodesk Galleria in downtown San Francisco on February 11. We celebrated another amazing year with the amazing community, Up to All of Us, which will convene again in Sonoma County, February 12-15. We started a non-profit. More on that in a minute.

Grateful to Make MakingBetter Happen

[Photo: MakingBetter at the Grand Tetons]

Megan came on full-time with MakingBetter in June of this year. We took our first serious vacation ever in July. We spent a lot of time with our families and friends. We lived and worked, together, on our own terms for the first time in our lives. We dealt with emergencies and surprise medical concerns. We innovated when we needed to, and we stuck to tried-and-true processes when we needed to, too. We lived well in 2015. I say all this because it’s important to celebrate success and to make sure that credit goes where credit is due. I write tonight grateful for a true partner like Megan, grateful for each and every client we had this year, grateful for each and every person who’s influenced how we do what we do, grateful for our sponsors and our partners and especially…

Grateful for the xAPI Community

It’s the xAPI community I want to talk to specifically now. There are a myriad of reasons why 2015 was good for Megan and me, but the one that stands out is the incredible gains in xAPI’s adoption this year. Our business boomed, and so did that of many software vendors who create solutions tailored to the many things people use xAPI for. We know projects are already being planned for the beginning of 2016 at a scale that equals the whole of xAPI adoption in 2015. These are measurable outcomes of an open source community that has been lovingly and painstakingly attended by the US Department of Defense and its initiative, Advanced Distributed Learning. xAPI is in every way a stunning success. It is proving that open government practices, a pro-entrepreneurial approach and an authentic embrace of open source can stimulate innovation, enable workable approaches to complex and serious challenges, and catalyze economic opportunities. It is far from the applied research and development activity it was four years ago. It is a mature spec that is growing its own industry.

Grateful for xAPI’s Growth

[Photo: xAPI Camp - DevLearn]

xAPI is so successful that it’s actually becoming a challenge for ADL to support it at the scale it now demands. The Design Cohort program that began in 2013 became so well attended that it couldn’t be supported by ADL anymore — they just don’t have the resources to do it on their own. Maintaining the spec is labor-intensive enough, given the resources ADL has, that certification isn’t something they can handle without stopping something else important. SCORM was created what amounts to an epoch ago in information and instructional technology, and ADL had over 40 engineers it could apply to SCORM alone. ADL now has dozens of high-priority projects and maybe six full-time engineers it can resource for xAPI. Fortunately, those of us who brought to ADL the concepts that enabled xAPI’s creation knew that the day would come when specs and standards would need to move beyond ADL to truly mature. This is why open source was so crucial a path for xAPI. Because xAPI is licensed Apache 2.0, anyone can take xAPI and mature it, and that’s just what we’re about to do, with ADL’s blessing and commitment to participate in the effort.

Grateful to Serve Our Community

The non-profit we started at the close of 2015 is the Data Interoperability Standards Consortium, or DISC (because, acronyms). There are many challenges to working with data: interoperability, security, privacy, professional competencies, validation, provenance, ethics, legalities, languages, formats, etc. We intend for DISC to offer the table where all communities of practice, individuals, organizations, governments and industries can work together to meet the complexities of working with data. It’s about more than xAPI, but make no mistake, xAPI is our priority in 2016. The transition from an ADL-organized xAPI Community to a DISC-organized xAPI Community will begin in the first quarter of 2016. By the middle of the year, we’ll have established working groups and special interest groups to explore ways in which xAPI may be extended, as well as certification requirements. By the beginning of 2017, we’ll have a certification program in place and an array of tools that will make working with xAPI’s vocabulary much, much easier.

That seems like a lot to get done in one year, and you’re right. It is a lot. But it will be done because it has to be done. xAPI is growing so much that if we don’t have certification in place by the end of 2016, we risk xAPI’s long-term future. We predict a massive catalyst for international adoption will emerge by the end of 2016 in the form of government procurement requirements around xAPI, because having data that everyone can understand and make use of is in the interest of public-good institutions. When governments are a year away from requiring xAPI support, and certified products are all that will be purchased, right now becomes the very moment where xAPI goes big. It is exciting, frightening and uncertain. It’s also fun, and this is what it’s like for us to be so fully invested when the stakes are this high. The fact that the stakes for xAPI are this high should be all the reassurance anyone needs that xAPI really is a big deal and worth our sweat to invest in its growth right now.

Grateful for a Future We Can Forge Together

[Photo: Seattle Gathering]

Because xAPI is open source, and because xAPI will have an organization that is focused expressly on its maturity, it’s going to get the chance to grow in a way that no learning technology has ever had the chance to do. Megan and I are proud to have an incredible team on DISC’s Board of Directors from around the world who represent years of extraordinary work in leading professional organizations, the science of learning analytics, the development of industry organizations, professional practice and xAPI itself. Very soon, we’ll announce our founding Board of Directors and post our by-laws, our 2016 goals and objectives and we’ll open membership. xAPI will forever be Apache 2.0 and we intend to ensure that it remain open source and cared for by an open community as long as it remains relevant. The organization we’re creating will finally structure how decisions about it are made, balancing the needs of those most invested in xAPI requirements with the needs of those most impacted by xAPI applications. Without the burdens and caveats that come with moving this activity into large spec and standards groups, as a community and an industry with many verticals, we can design our own future with xAPI.

Grateful for Your Help In What Comes Next

There have been only a few sketchy roadmaps for what Megan and I have been doing together as MakingBetter. There are even fewer notes on what we’re about to do with DISC in forming an industry organization to support a major open source project with the cooperation of its stewards in the US DoD. But this isn’t the first time Megan and I have had to work with a community to create something that didn’t previously exist. We’ve done it with Up to All of Us. We did it in growing xAPI into a fully realized community of designers, developers, content and data wranglers. We did it in figuring out how to make open source fit the US government. And now we’re going to figure out, with the full interest of and for the community, how we grow the industry and professional practice around xAPI. It will require paying members and continued open community participation. It will require a level of dedication, enthusiasm and grit that hasn’t been demanded yet. Given all that, I’ve never been more confident in our abilities, all of us together, to figure this out. We’ve been able to plan and go off-plan and get this far. It stands to reason we’re going to go a lot farther together.

Megan and I are staking our business on xAPI. We’re staking our families on xAPI. We’re committing our lives over the next couple of years to the community and industry around xAPI and we are grateful to do so.

We wish you all the best for this holiday season and for the new year to come. We’ve loved hanging with you. We’ve loved working with you. We’ve learned so much in doing so and we can’t wait for the next level shit about to happen!

More soon!

Filed Under: DISC, Experience API, Standards, Uncategorized

The 3rd Gear

August 5, 2015 By Aaron

[Photo: Cruising down a highway through Montana in a '15 Ford Mustang]

As xAPI shifts into 3rd gear, with an early majority comes a need for a consortium that will steward xAPI into perpetuity — a table where other industries can sit and work out the ways xAPI will meet their particular needs. I’m talking about HR systems, medical devices, folks who make beacons and sensors, manufacturing, energy companies, engineers, school administrators. There are other groups we aren’t talking with yet, and they’ll make their needs known. We’ll have certification tests. This will make it easy for folks buying software or hardware to see a stamp of approval — a neutral third party guarantees a product is 100% Grade A xAPI. With it, the industry that makes xAPI software and hardware will have to grow up. With it, new job titles, new practices, new competencies and new goals will be born out of those titles, practices, competencies and goals we already have, from wherever we are.

Megan and I had a great, long, long-overdue and well-earned vacation road-tripping through mountains before and after xAPI Camp – Amazon. Reflecting on it now, it couldn’t have been a more apt way for us to vacation and get ready for what’s coming. xAPI is gaining highway speed. It’s gonna be surprising and exciting. We’re going to work harder than ever to keep pace with adoption, solidifying what we have and making it flexible for even more adoption. With every climb, we’re going to see something way more epic than before.

[Photo: In Arapahoe, view from a windshield about to turn into a curve]

So I say, “Welcome,” to the coming early majority. Welcome to xAPI.

Thank you to everyone who makes xAPI Camp happen. Our hosts. Our supporters and sponsors and partners. Our speakers! Brian Dusablon, one of the best Experience Ninjas around, working invisibly behind the scenes to make each event amazing for our participants. And to our participants: you are creating an active and vibrant community that is going to transform multiple industries. Through your activity and participation, you’re building something huge that’s purely driven by a shared goal: to improve ourselves, the places we work and the world we live in.

Let’s go, team. We all got work to do. We now have more horsepower and we’re gaining speed. Let’s keep our eyes on the road ahead and not just the mile marker we hit.

Put the pedal to the metal — and hang on. 🙂

Filed Under: DISC, Standards, Uncategorized Tagged With: adoption, consortium, xAPI

xAPI's Tipping Point for Both Community & Industry

January 7, 2015 By Aaron

"We're gonna need a bigger boat"

Since last year, I’ve been working with IEEE to standardize the Experience API (xAPI) specification. Most of my work is with the IEEE LTSC, but recently a few other IEEE standards groups have expressed interest in using xAPI. These groups focus on developing personal health devices as well as the Internet of Things. They’re looking at xAPI because it provides a way to share data across devices, related to people, in a way other systems can understand. This can happen, potentially, with zero configuration. Right now, these groups don’t have a way for devices to talk to each other that all the different players can agree on. They’re not interested in reinventing the wheel.

This is exciting stuff for everyone with a hand in making xAPI what it is. What it could mean is a tighter relationship between learning activity and its outcomes. Meanwhile, different needs among xAPI adopters in the learning technology business are becoming clearer. Many products today provide solutions for experiencing content on a variety of devices, specifically devices that were never supported by SCORM. Some products offer solutions that analyze xAPI data. A few focus on content management. Others track activity on websites, simulations, and sensors. What’s emerging are learning activities that look a lot more like work or straight-up computing.

Filed Under: Experience API, Standards, Uncategorized

The Community Responds

September 5, 2014 By Aaron

I hoped in my last post that the community would indeed respond with ideas, insights, opinions and concerns. Thank you for your comments on the post, the retweets and feedback on Twitter and for what I’m sure has been a lot of not-public rumination. The feedback thus far has been incredibly helpful and I will attempt to summarize and then clarify a couple of points.

Most everyone agrees the data model, at the very least, could and should be broken out on its own. In almost every comment on the blog, and in sentiments shared with me via email, Skype and instant messages, there’s near-universal support for a modular approach to standardization.

Ideas

While we seem to all concur that we’d do well by breaking up the spec into smaller components, there are ideas shared by the community so far that differ from what I posted — specifically on what lines can/should be drawn around different parts of the existing specification.

  • A “write-only” API vs. a “query” API.
  • A fourth standard to cover retention policy, privacy, protection, access, delegation and security.
  • A separation of the Data Model (statement structure and its components) from the Data Binding (JSON), effectively making it possible to work with XML, or other bindings, as an alternative (see the sketch after this list).
  • Consider the separation of the spec in terms of the data (JSON), the API (REST)… maybe not specify the LRS at all.
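To make those last two ideas concrete, here’s a minimal sketch of what separating the data model from its binding could mean in practice: the same statement structure serialized once as the JSON binding the spec uses today, and once as a purely hypothetical XML binding. Nothing about the XML form below is specified anywhere; it’s an assumption for illustration only.

```python
import json
import xml.etree.ElementTree as ET

# The abstract data model (simplified): actor, verb, object.
statement = {
    "actor": {"mbox": "mailto:learner@example.com"},
    "verb": {"id": "http://adlnet.gov/expapi/verbs/experienced"},
    "object": {"id": "https://example.org/activities/intro-video"},
}

# Binding 1: the JSON serialization the current spec defines.
as_json = json.dumps(statement)

# Binding 2: a hypothetical XML serialization of the same model,
# shown only to make the idea of a separate "binding" concrete.
root = ET.Element("statement")
for part, fields in statement.items():
    node = ET.SubElement(root, part)
    for key, value in fields.items():
        ET.SubElement(node, key).text = value
as_xml = ET.tostring(root, encoding="unicode")

print(as_json)
print(as_xml)
```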

Concerns

There are a couple of concerns that seem to resonate pretty strongly among the community.

  • What must be accommodated in terms of privacy and security, at a standards level, is unclear. Use cases representing the various challenges to current implementations of the specification are needed — thinking about the EU, UK, Germany, Australia… California(?). We must also seek out privacy and security laws in other parts of the world. Only then can we specify how to accommodate the various use cases.
  • The current specification group in ADL is a logical group to do the splitting of the specification as it exists today, rather than IEEE, which has less representation from the people who developed and have implemented Version 1.0 of the spec. Handing the spec as it is over to another group to perform surgery on it may make for a longer process, whereas the current spec group could accelerate this part of the effort.

Clarifications

I want to thank you all again for the feedback, and if you have more to share, keep it coming. I’ll update this post as more comes in.

I agree with the idea that breaking up the current specification doesn’t have to break current implementations. It is certainly not my intent to break what works today. I helped build this with you. I don’t want to see it destroyed… but I’m not going to lie or promise that it won’t break, because it won’t be up to me. If stakeholders in the current xAPI specification group are involved in standardization through IEEE, then they will be the best hedge against breaking current implementations.

I am not tied to the exact divisions of the specification as I laid out in my earlier post. I proposed the divisions because, frankly, a discussion was needed and something to react/respond to is where it starts. I do believe that in some form or other the “statement making” part of the spec needs to be on its own because the use-cases for it are numerous.

I’ll have more to share as appropriate. I wanted to take an opportunity now that the dust has settled a bit to summarize what I’m hearing, to thank you for sharing it, and to encourage you to keep sharing.

Filed Under: Community, Experience API, Standards, Uncategorized

Standard Options Apply

September 2, 2014 By Aaron

This post is for vendors, developers and implementers of the Experience API and for anyone who may one day create something with xAPI. The choices made soon will ultimately impact what you will be able to do with xAPI 3, 5 and 20 years from now.

I’m going to lay out options for how we can approach standardization and the reasoning behind them. I’m going to ask you as vendors, entrepreneurs, developers, designers from all around the world to provide me with your counsel as I ultimately must direct the effort. I ask you to support your interests with your participation as we go forward.

This is a long read. There’s nuance that’s not easily summarized. Please make the time to read, comment below, post responses that link to this statement.

In August 2014, I participated in one of the IEEE Learning Technology Standards Committee (LTSC) monthly meetings. This particular one was special, as we were formally reviewing the Project Authorization Request (PAR) for the Experience API to become a formal standards project. Once the request is made and approved by IEEE’s New Standards Committee, we begin the last leg of a journey that started with friends riding roller coasters at Cedar Point amusement park in Sandusky, OH, back in 2009.

The PAR wasn’t approved by the LTSC in that August meeting. It wasn’t the slam dunk I was naively hoping it would be. There were questions raised by the committee that may have easy responses, but the easy responses I can share aren’t necessarily the better responses we need.

This has me reflecting deeply about what kind of future with xAPI we need to enable. So here goes.

We (as citizens of the world) generally need better responses to the tiny events that color the big picture. Over the last couple of months, looking at recent events in the US and around the world, looking at our own work with and in organizations that are dealing with a stagnant economy, looking at ourselves… looking at myself… it’s so tempting to do the easiest or simplest thing in any given scenario. It’s impossible to figure out the best thing to do, because the future is filled with rabbit holes and we never can go down all of them.

We must be mindful of our options and deliberate about our choices.

When we’re talking about xAPI, we must appreciate that there are already millions of dollars invested (in the value of people’s contributed thoughts, their time and actual capital) in development and adoption. However, we have to also be mindful of the billions of dollars to be invested in xAPI going forward.

If SCORM taught us anything, it was these two things:

  • First, it taught us how to make real money in learning technology by formalizing how we commonly approach enterprise software;
  • Second, it taught us how costly it is to not be mindful or deliberate about our choices, technical and political, at the specification and standards level.

I can feel some of you bristling already about the focus I have on the financial perspective. My perspective is this, and I can’t say it strongly enough: there’s no way we can make real change in learning, education and training without it being financially viable. Money is what makes things happen. I feel a responsibility to make sure xAPI is designed well enough to encourage the investments that make the promises of the spec actionable.

As an industry, we’ve gotten this far, this fast, with xAPI, and must continue to do so, precisely because people can find ways to profit from their investments in sweat, time and capital AND make the world easier to learn from. I want to make it as easy as possible for people to innovate and solve real-world problems with this specification. I want to encourage it by keeping it as open as we can AND by making it possible for the best approaches, not just the best spin, to find adoption.

We’re on the verge of something. We can take this open specification and transform it into an international standard that will catalyze data interoperability across systems. Done well, this enables people to “own” their data, promoting citizenship and personal autonomy in a world that’s more and more digital. Or… we just take this open specification as it is, and try and keep the scope to simply transposing it for standardization, ensuring that adoption years from now will look pretty similar to what it looks like today… which looks exactly like SCORM.

As the leader of this standards effort, I want to hear what you have to say. I want to consider diverse opinions and insights that don’t come from within my echo chamber. In the end, I will ultimately make the decision about the scope of the standards project.

These are the rabbit holes and trying to go down them all, repeatedly, is exhausting.

Consider Breaking Up the Spec Into Separate Standards Efforts

Some in the LTSC are very familiar with the European Union’s policies on privacy, security, data ownership and the rights of individuals in digital spaces. In response to their concerns about “tracking,” which rightly furrows eyebrows and adds wrinkles prematurely to us all, a suggestion that gained momentum was that we consider breaking up xAPI into three separate standards efforts — three different documents to be linked together. Doing so would make it possible to isolate the areas of the existing spec that cause concern. This approach has some advantages that I’ll expand on.

Think about this like we think about WiFi. “WiFi” is essentially a set of IEEE standards — its project number is 802.11. There are different forms of WiFi that co-exist — 802.11a, 802.11b, 802.11g, 802.11n… Each does a slightly different thing, but altogether any/all of these are “WiFi.” This is the frame to consider for the Experience API. “xAPI” will have its own project number (1484.xx) and it would look like this:

1484.xx.a – A standard for the Data Model would describe how xAPI statements are formatted. This would remove the need to necessarily use the Statement API or have a Learning Record Store to store data in statement format. Since the data model can be applied generally, it means that there are lots of ways statements can be used, which would encourage more adoption by lowering the barrier to entry, which (in turn) could influence a lot more activity providers. You may ask, “Why would someone only want to use the data model?”

Real use-case: One current “adopter” of xAPI is only using the data model, without an LRS. I put adopter in quotes because, according to the spec, without the LRS, he’s not conformant. Anyway, in his implementation he’s using the JSON binding for Activity Statements to track what people are doing in his software, in the context of how people use the software to accomplish specific tasks. He’s storing the statements in his own database and has no reason to share them with another system. He’s not taking in statements other than those he’s designed. He is simply using the data model to track activity in a consistent way in case one day he does need to share them, but right now there’s no reason to incur the cost of an LRS or use the Statement API.
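As a rough sketch of that data-model-only pattern (all names below are illustrative, not drawn from the adopter’s actual implementation), an application can format its own events as xAPI statements and keep them in its own local store, with no LRS and no Statement API involved:

```python
import json
import sqlite3
from datetime import datetime, timezone

# A local store for xAPI-formatted statements; no LRS, no Statement API.
db = sqlite3.connect("activity.db")
db.execute("CREATE TABLE IF NOT EXISTS statements (stored TEXT, body TEXT)")

def record_activity(user_email: str, verb_id: str, activity_id: str) -> None:
    """Format an event as an xAPI statement and keep it in our own database."""
    statement = {
        "actor": {"mbox": f"mailto:{user_email}"},
        "verb": {"id": verb_id},
        "object": {"id": activity_id},
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    db.execute(
        "INSERT INTO statements VALUES (?, ?)",
        (statement["timestamp"], json.dumps(statement)),
    )
    db.commit()

record_activity(
    "user@example.com",
    "http://adlnet.gov/expapi/verbs/completed",
    "https://example.org/tasks/export-report",
)
```

If that data ever does need to be shared, it’s already in a shape other xAPI systems understand; until then, there’s no LRS cost to carry.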

1484.xx.b – A standard for the Statement API would then act as the means to validate statements made, whether in an LRS or not. As it is now, an LRS is really useful in concept for data transfer, but most adoption currently isn’t about sharing data across LRSs, and if you’re into doing “big data” (or, more aptly, “messy data”) mashups, an LRS only keeps xAPI statements. This standard would give any database or web application a way to let in or keep out statements that are junk, and to use xAPI statements as a system might use any other data source. You may be asking, “Why would someone only want to use the Statement API?”

Real use-case: Some of the largest educational publishers are implementing the Statement API and data model into their existing internal data storage to validate xAPI-formatted Activity Statements before accepting them into their data warehouse, along with all sorts of other data they’re tracking. They have no intention of sharing this data with any other system, and they don’t want the segregation of xAPI Statements from the other data they’re collecting. Rather, they want the xAPI data co-mingled with these other data to get a fuller analysis of how people are using their materials.
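A minimal sketch of that kind of gatekeeping is below. The required fields reflect the basic actor/verb/object shape of a statement, not the spec’s full validation rules, and every name in the sketch is illustrative rather than taken from any publisher’s system.

```python
REQUIRED_TOP_LEVEL = ("actor", "verb", "object")

def looks_like_statement(candidate: dict) -> bool:
    """Cheap structural check before accepting a record into the warehouse.
    Real Statement API validation covers far more (IRIs, agents, timestamps...)."""
    if not all(key in candidate for key in REQUIRED_TOP_LEVEL):
        return False
    if "id" not in candidate.get("verb", {}):
        return False
    return True

def ingest(candidate: dict, warehouse: list) -> bool:
    """Co-mingle xAPI-shaped records with whatever else the warehouse holds."""
    if looks_like_statement(candidate):
        warehouse.append(candidate)
        return True
    return False  # junk stays out

warehouse: list = []
ingest({"actor": {"mbox": "mailto:a@example.com"},
        "verb": {"id": "http://adlnet.gov/expapi/verbs/answered"},
        "object": {"id": "https://example.org/questions/42"}}, warehouse)
```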

1484.xx.c – A standard for the Learning Record Store would focus on the portability of data among LRSs, and on the authentication and interfaces that connect various functions with other systems. Creating an LRS is the most difficult and complex part of xAPI, and its uses are scoped only to activity statements that are valid xAPI statements. Anyone who’s built an LRS themselves loathes the complexity of the work that will be involved in figuring out the privacy, security, data ownership, transport and exchange mechanisms we’ve put off because they were too complex… but if we want real international adoption of xAPI, this will need to be addressed for the European Union. Or it won’t… and the failsafe is that the above two specifications can garner international adoption without a lot of pushback, and LRSs as they are can exist where they can.

Currently, adoption of xAPI is very LRS-centric. I personally believe that the LRS is not the most valuable part of xAPI. I enthusiastically embrace LRSs as a product category, but it’s important to remember that LRSs-as-discrete-applications was never the intent. Rather, an LRS describes a scoped set of functionality that could be part of any app, any software, anything that reads data generated by another app or piece of software. The LRS is currently the most marketable concept people understand because we can all relate an LRS to our expectations of what a learning management system does. The long-term value of standardization comes not from a spec that revolves around an LRS, but from a spec that is focused on the data itself and the myriad ways it can be exchanged. As my friend Steve Flowers put it, think about LRSs as antennae, not fortresses.

You are likely asking, “Why would anyone want to use the LRS without the Statement API or the Data Model?”

Real use-case: Companies (plural) tried to build Personal Data Lockers. They wanted to make it possible to share a learner’s activity data across systems — not just keeping it inside one LRS. Rather, the intent was to have the data follow learners across systems. Rather than think of an LRS as a fortress that holds all the data, these companies were trying to follow the original vision of the LRS as antennae that send and receive data that follows the learner wherever they go. These implementations weren’t fully conformant to the spec, because sharing data according to the spec as it is… well, it is really hard. Ironically, in the two cases I’m thinking about, both companies turned their attempts at Personal Data Lockers into full LRS products.

“xAPI” would then be the term that describes the general set of standards and what they enable, but each individual standard deals with something distinct, supporting the greater whole.

The hope in this approach rests on the fact that the xAPI specification community more or less held a high level of amity while the spec was being developed. The shine may have dulled a bit in the last year and a half as competing vendors polish their own chrome more (and some are admittedly better at it than others), but this approach may well forge new opportunities for cooperation and competition, as well as sweeten the honeypot of adoption. We need to make xAPI friendly for more adopters — starting with those who have chosen not to use a standard and built something proprietary because they couldn’t adopt only the part of our spec they needed. If we can really spur more interest and adoption, and widen the possible ways in which people can adopt, every vendor participating stands to gain in a larger market. By making it easier for people to adopt specific parts of the would-be standard, we enable use cases on the fringe of our imaginations that may emerge as the strongest and most valuable ones. By making the LRS its own standard, the things that were really difficult to address at the spec level — like how data is shared across different LRSs — would be given their due attention. By making the Data Model and the Statement API their own standards, we enable adoption for use cases where lower barriers to entry are needed. By making the Data Model its own standard, we encourage more Activity Providers. Given how LRS-heavy adoption of xAPI is now, we need this to grow.

The risk in this approach is that it will be a freaking difficult path. It will likely break current implementations. To be honest, I don’t have a pony (read: product) in this race and breaking changes in any approach to standardization are inevitable. I don’t personally stress about that part. I worry much more about how the LTSC can manage three concurrent standards projects that must work together. It requires a lot of attention and participation, and the kind of cooperation and amity among competing interests that sometimes fails standards groups. It will take longer to create the standards this way, though some things — specifically the data model — may be able to standardize sooner.

Consider Keeping the Spec “As-Is” In One Standard

All of the above stated, it is admittedly tempting to try and keep the spec as it is, even if it narrows the spread of adoption.

One reason is that there are over 60 vendors adopting it with no interoperability issues with the data — though few have tried to share data across different LRSs. Still, that’s a pretty damn significant reason on its own merit. Over a year past its release as Version 1.0, there are as many (or more) open source LRS options as there are commercial options. As I said before, the LRS isn’t really where the magic is with xAPI, but given the framing of the specification, the conversation around xAPI has taken the shape of trying to answer the question “What can an LRS do for your organization?”

To be honest, that’s a question that at least has a more immediately tangible response — an easy response. I don’t love that question, but as a pragmatist and someone who wants to see things get done and catalyze economic growth not just to make existing vendors more wealthy, but to encourage new players to compete on a level playing field so that the best products find adoption… (breathe) that’s a framing that’s focused and easy to design and develop for.

If one considers xAPI as designed to solve a fixed set of issues in response to some current (think the last five years or so) challenges with eLearning (particularly how we approach communication with an LMS outside of that environment), then, while incomplete on its own for eLearning, xAPI is an amazing success story. That we can use web services and describe a consistent (albeit imperfect) approach to handling offline activity and syncing localized activity data back to an LRS… this is a huge advancement beyond what we’ve done with SCORM — even as we acknowledge that it doesn’t replace SCORM. People still need content interoperability. xAPI is about data interoperability. They are not the same thing, and modeling our approach to data on how we approach content is tempting, but misleading.

The hope in keeping the spec together in one document is that, well… it’d be easy, right? It’s an existing spec. It works. People use it. One can argue as I have above that it’s not supposed to be about the LRS, but practically speaking, it is whatever it is. There’s plenty of room to innovate and differentiate with the specification as it exists. It may be imperfect, but it does work beyond just fixing things that we eventually figured out were really stagnating about SCORM. If we could get the scope through LTSC and IEEE’s New Standards Committee, it might only take two years and we’d have one legitimate standard that could be adopted internationally.

The risk in following this path is that we’re ignoring the opportunity to create something better. While going this way doesn’t necessarily shut down the ability for people to own their own data, or to move data around from system to system, or even to make that transfer more secure and respect privacy, we’re forever linking the components above so tightly that it will stay a closed loop. Only the learning technology community will care for this and adopt it, making it difficult for HR, Enterprise Management and ERP systems (let alone audiences we’ve never talked with who might just want to adopt the data model) because, well… it’s “learning” and it requires “all the things” in the spec to adopt. And whether you care about adoption in the EU or not, the smart money says we need to look beyond learning departments inside of enterprise. Talent is the new Learning, and by going this path we risk missing those greener shores where meaningful adoption could be found. And let’s not forget what happens if we ever need to revise this one document. Should we ever need to make a change — even something as simple as a new transport mechanism, or the structure of a statement — the whole spec is going to be opened for revision. It’s near impossible to effectively manage an international standardization process that restricts scope at the document level.

The EU may not be interested in xAPI as one document that reflects the current specification because of its ambiguities around security and privacy. They may be justifiably squeamish about tracking. As Avron Barr reminded me, we’ve certainly seen with the vehement rejection of InBloom that even in the United States, we all have some concerns about the privacy and security of learning data. Certainly, though… corporate, government and military interests in Asia and Latin America may embrace the spec as it is, simply because it solves a set of very painful problems and it does that well. And… even in the case of the EU, while the standard will likely break current implementations, it’s possible to focus the accommodations for security and privacy concerns on the areas that are prone to remain stable. Still, the way the spec is now, it forces adopters to collect data and to make it sharable. That’s not in the best interests of every organization.

Where I Stand

I debated weighing in myself on where I stand, but for those of you on the fence, maybe it will help you wrap your head around this nuanced issue. I personally lean on the side of breaking up the spec into three standards.

My wise friend Tom King put it like this:

“This issue could be framed as a core issue of monolithic versus modular. Or perhaps framed another way — what makes a spec, any spec, good?

A monolithic approach has a few key benefits. And it seems better when there is no concern about backward compatibility and limited concern about forward flexibility. It can also help with clarity as compatibility and adoption is “all-or-nothing” for the players. As a “1 document” spec there is just one big piece to manage — likely a speed advantage if document processes offer zero parallelism — and the ‘go backs’ all happen in one larger process if changing spec’d functionality in one place impacts a different spec’d functionality.”

A modular approach has its benefits. Tom shared his thoughts about the ACID test for databases. ACID stands for atomicity, consistency, isolation and durability. These are goals that every database should strive for, and when a database fails at any one of these, it is considered to be unreliable.

Tom asked me, “In this light, what makes a standard ‘reliable?’ Does one approach favor more or fewer of the ACID elements?” A modular approach is certainly atomic; it helps to ensure there’s consistency going forward for each component; it isolates the potential impact of changing any one component without needing to change the other components; it ensures that the pieces, should they never need a change, can endure and find more and more interesting uses. Not that I think of only this litmus test, but it’s a good litmus test.

While the investment of thought, time and money that has gone into xAPI so far is significant, like Tom, I don’t know of any organization that is currently so dependent on xAPI as it exists today that its bottom line is at significant risk from changes to the current spec or from a delay in standardization. Especially when I consider the long term.

If we go with the monolithic approach, it will likely make it difficult for people to innovate beyond the initial vision. We can’t foresee all the architectural decisions that would constrain us down the road, but we know from our history with learning technology specifications that something as simple as the requirement that SCORM’s API be presented in a “web browser” crippled any natural evolution or innovation. As Tom wrote to me, “Why couldn’t someone use the non-verb-value model just for writing/storing objectives, or assessment criteria or gap analysis?” The way the spec exists today, they can’t, but it seems to me they should.

Once the standards are established, the investment and dependency on them will only increase as long as the standards are usable and useful. As Tom suggested to me, we need to adopt an approach that is both responsible and sustainable. We’re setting up a standard that, like SCORM, will impact industries for 20-30 years (at least). If we bundle too many big pieces together into one document, we’ll render the standard inflexible.

To me, the risks of sticking with a monolithic approach simply outweigh the risks of going modular. The opportunities to be gained by going with a modular approach, in my mind, far outweigh the opportunities we can likely predict in keeping the monolithic approach.

The standard will likely break current implementations no matter how we proceed. We must seize the opportunity to address the difficult things we haven’t addressed. We couldn’t address them before, or we wouldn’t have had a spec to work with at all. We can do this now. By working with a diverse team representing the EU and other parts of the world, we can deliver a set of standards that will be relevant and significant, globally, for years to come.

A timetable for a modular approach could look something like this:

  • 1484.xx.a (the Data Model) – Draft: 2014-2015; Vetted: 2015-2016
  • 1484.xx.b (the API) – Draft: 2015-2016; Vetted: 2016-2017
  • 1484.xx.c (the LRS) – Draft: 2015-2017; Vetted: 2017-2018

The Data Model could be done quicker. The API probably should start once that’s kinda locked down. The LRS could start concurrently and will likely take longer because scoping where it really has to change from the existing specification is going to take a lot of time and discussion (and probably some debate).

These are my thoughts. IEEE LTSC is the appropriate place to figure out the timetables. International adopters, outside of NATO allies, are not able to work on this through ADL for obvious reasons, but they have come (and will come) to IEEE, and all sectors of adoption are welcome to work together there.

One More Thing

Even though I lean one way more than the other, while I lead this effort, every member of the LTSC who participates on the standards project (or standards projects) for xAPI is a volunteer with a vote. Starting with the intent to keep the spec as-is into standardization is no guarantee that it will stay as-is. Put another way, regardless of the path I scope, it’s necessary to know that current implementations will one day break for one reason or another.

What’s important to me, and what I think should be important to you, is the process by which the standards are shaped. The only way I can deliver a standard, or set of standards, that is better for learners, organizations and everyone’s non-trivial commercial interests in xAPI is with your active involvement and commitment to the standards effort once it launches.

If you care about what the standard will be, you will need to participate to protect your interest in it. That’s going to be a pain in the ass, but it’s honestly the only way you can hope to get what you need out of the effort. At the very least, active participation will help you “read the tea leaves” on what the future holds. I can see this through to the end and work with you to make it the best damn standard possible… but where everyone has a vote, no one can just wave their magic jazz hands and influence votes.

I’m committed to making better choices (no pun intended). I can’t possibly make everyone happy with the decisions I need to make, but I can read, I can listen and be wiser for it.

Thanks for staying with me this far. 🙂

Filed Under: Community, Experience API, Standards, Uncategorized

IEEE "State of the xAPI" Survey Results Released

June 2, 2014 By Aaron

With over 40 different vendors and 80 different commercial and open-source tools represented, I’m happy to release the results of the first State of the xAPI: Tools Survey on behalf of the IEEE xAPI Study Group.

This data was collected from vendors voluntarily sharing information beyond simply stating they’ve adopted. In this comma-separated values (CSV) file, you’ll see which tools support the Experience API at Version 1.0 and/or Version 1.0.1 (over half of the tools do), which should provide some insight into which tools on the list might work well with others. The survey results in this form are licensed CC0, the Creative Commons public-domain dedication. As long as you cite it, you are encouraged to use this information.
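If you want to slice the results yourself, something like the sketch below works once you’ve checked the actual header names in the downloaded file; the filename and column names used here are placeholders, since the real ones may differ.

```python
import csv

# Placeholder column names; check the CSV's header row and adjust.
TOOL_COLUMN = "Tool"
VERSION_COLUMN = "xAPI Version"

with open("state-of-the-xapi-tools-survey.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

# Tools reporting support for xAPI 1.0 and/or 1.0.1.
current = [row[TOOL_COLUMN] for row in rows
           if (row.get(VERSION_COLUMN) or "").startswith("1.0")]
print(f"{len(current)} of {len(rows)} tools report support for Version 1.0 or 1.0.1")
```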

The survey ran for the last two months — from April through May, 2014.

In the next month, I’ll prepare an initial draft reporting on the results of the survey. I will do this in GitHub as a Markdown document to encourage others to help author (and be attributed for their work). This report will be licensed CC-BY, also meant for the public to freely cite as needed.

So what are you waiting for? Get the State of the xAPI: Tools Survey results today!

Filed Under: Experience API, Open Source, Standards, Uncategorized

Answers: Where are Discussions About Experience API?

March 25, 2014 By Aaron

In the last couple of weeks, we’ve started to get some great questions about getting oriented to working with the Experience API. Megan and I have been so involved from the ground floor (as a lot of the community has) that we sometimes forget people are finding out about it anew… all the time.

One question we received was pretty basic: Is there a discussion group around xAPI that is already a great place for having discussions about it?

Well, there are several. I’ll give a run-down of those I know, and I hope, through comments and other feedback, to add to this list.

General Discussions

While general (i.e., not vendor-specific) discussions happen almost everywhere, these forums and mailing lists have the most to offer in terms of what people are generally talking about with xAPI.

ADL Mailing Lists

Advanced Distributed Learning (ADL), the organization that has hosted xAPI’s development, has two main mailing lists:

xAPI-Spec is concerned with the nuances of the specification itself, and if you’re a technical developer working to implement the API into your content or media, your web application, your device… really any kind of technical implementation, this is the mailing list you should hit first, as the most knowledgeable people who are involved with the technology use this list.

xAPI-Design is a mailing list focused on a growing community of practice around designing with xAPI. These are people who are openly participating in open cohorts hosted by ADL to experiment with ideas that use the API to approach real-world challenges.  Even if you’re not participating, it’s a good place for discussion around what you can do with the API that isn’t just an off-the-shelf solution.

LinkedIn Groups

LinkedIn has three groups I’ve found that are focused on the Experience API. There is not much discussion happening on them, and there is generally a good bit of cross-posting across them, but these groups are good places to catch links to posts, webinars and events of broad interest to the community around the API.

One group is simply called xAPI (Tin Can). Another is called Conform 2 eXperience API, and it requires moderator approval to join. The most active of the three, moderated by Rustici Software, is the Tin Can API User Group, and there are some actual discussions on there.

IEEE has a Google+ community that is focusing on identifying broader implementation requirements as we prepare to take Experience API into standardization.

Deeper Dives

I expect that as soon as I post this list, I’ll find out about a lot more mailing lists and I will add on. 🙂

Learning Locker has a mailing list that is fairly active with implementation-focused discussion around connecting to it and other learning record stores (LRSs). One active thread currently is around how to approach mashing up xAPI and Mozilla Open Badges in a way that the community will generally adopt. This project also has a Google+ community.

I’m sure there are more tool-specific or implementation-specific discussion groups and forums. Please spread the word and I’ll keep adding on here.

Filed Under: Community, Experience API, Standards, Uncategorized

