
MakingBetter

Skills and Strategies to Engage, Change and Grow Others


Talking ‘Bout Praxis

February 22, 2021 By Aaron

I mean, eventually, we are talking about practice; today, though…

Today, I begin sharing my praxis around learning analytics. That praxis includes designing and engineering feedback loops with xAPI.

2020 sucked, but there were silver linings. Commuting to the office came to a halt, and about four hours’ worth of daily transition activities between home and office suddenly became free time. Add to that all the meetings people simply stopped scheduling. Consulting for Elsevier, where I manage their Learning Analytics strategy, the planning and architecture work of 2019 got executed in 2020.

Customers were using a product and content based on things I actually had some hand in. I spent 2020 preparing for what happens next: taking an MVP and delivering all the value envisioned for it.

Praxis?

I can write diatribes about learning under lockdown. Instead, this year I’ll do some heavy lifting to support the role praxis has in creating feedback loops. Praxis accelerates professional practice by encouraging constructive discourse around the mechanics, theory and criticism of how professionals perform their practice. Megan and I encouraged such praxis around learning analytics with xAPI Quarterly and xAPI Camps.

Since 2017, as I learned how Elsevier works from the inside, I figured out how to jettison some dependencies on legacy, siloed structures. In that same timespan, xAPI Profiles matured, as did xAPI.

I’ve shifted from the theoretical—specification creation—to the practical. With the work I’ve done for countless organizations in modernizing learning architecture, Elsevier is the first place I’ve worked where I’ve been able to see projects through to their fruition. Because of the lockdown, my life slowed down enough to reflect, to evaluate, to process and learn and, ya know, do things differently.

Thanks to standard tuition reimbursement for professional development at Elsevier, I finished 36 weeks of coursework for my Lean/Six Sigma Black Belt certification from Villanova in May 2020. Those extra four hours a day really came in handy for that effort, let. me. tell. you. I reacquainted myself with the statistics I first learned through my Math Education minor. Now, with actual business priorities attached, I finally had the context to understand the real power behind math I’d known for years, in theory and in practice, and to put that knowledge to use.

We’re talking about praxis.

It’s probably important to consider some inherent conceits in my practice. I tend to think of the work I do as needing to hold up for 100 years and, still, be re-composable. Not necessarily because my work is so good… so much as digital works using xAPI just might be in use that long. Think of how enduring SCORM is as a technology. Look up the oldest eLearning you have in your LMS. Consider the 47 BILLION dollars spent last year, around the globe, on learning technology upgrades. NOW consider how much money will be spent every year on NEW learning technologies (which are increasingly going to be based on xAPI because… reasons…). And think of how much harder it will be for the next evolution in technology to supplant all this new infrastructure.

I can’t deliver anything related to xAPI that isn’t built to endure. Learning tools should fundamentally behave flexibly and nimbly, as if they were designed and engineered to be easy for professionals like us to keep working with. I know I’ve done quality work when the workflows that engineers, authors, administrators and learners deal with around learning technologies become more fluid, clearing the obstacles to a rich and rewarding learning experience. My passions and expectations around quality in learning technology practice may not be entirely rational on their surface, and I can acknowledge that much 😉

Anyway, a result of such passions is that I was very happy to have had the opportunity to collaborate on some papers. Kirsty Kitto of UTS (University of Technology Sydney) introduced me to John Whitmer of Schmidt Futures, and through several online chats over 2020, we wrote a position paper for the Society for Learning Analytics Research on Creating Data for Learning Analytics Ecosystems. Through those chats, John in particular got me thinking a lot more clearly about the telemetry of learning data as something distinct from the analytics. Telemetry is about the engineering of the data: how it’s structured, where it goes, how you get it out into a report.

Currently, practice is usually limited to what can be analyzed by collecting and counting things in the data produced by learning content. Hint: that’s just how you collect metrics.
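To make the distinction concrete, here’s a minimal sketch (hypothetical data, boiled down to the essentials) of what that collecting-and-counting looks like. Tallying verbs across statements gets you metrics; it doesn’t get you an analysis.

```python
from collections import Counter

# Hypothetical, heavily simplified statements: counting what happened is
# metric collection, not analysis.
statements = [
    {"actor": "mailto:pat@example.com", "verb": "http://adlnet.gov/expapi/verbs/completed"},
    {"actor": "mailto:pat@example.com", "verb": "http://adlnet.gov/expapi/verbs/passed"},
    {"actor": "mailto:sam@example.com", "verb": "http://adlnet.gov/expapi/verbs/completed"},
]

counts = Counter(s["verb"] for s in statements)
print(counts.most_common())  # raw tallies; the analysis still hasn't happened
```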

It’s the ways you start to operate and make decisions, even something as simple as which data informs which decision and when, that start leading down the road to a learning analytics strategy. When that telemetry can be governed in one place, in ways that align with the business/learning/analytics strategy (e.g., xAPI Profiles), we can scale this work. With scale and impact in mind, after the election in November, John approached Megan and me with an opportunity to contribute a policy paper for the incoming Administration. Reflecting our thoughts on rapidly scaling the production, collection and analysis of educational data with the xAPI Profile Server, the Day One Project published our position on Improving Learning through Data Standards for Educational Technologies.

Wanna talk about praxis too?

xAPI Profiles standardization runs through IEEE, led by yours truly. We meet on the fourth Tuesday of every month, 3:30 Eastern. With a team that includes Will Hoyt of Yet Analytics as vice/co-chair, we’re going to have agendas and action items (homework) for participants with a wide range of technical insight and ability. If you’re new to xAPI Profiles but you know you’re going to need to put them to work, this is a good time to jump on board: we’re going to really start the work in March 2021 with the idea that we’ll wrap standardization by March 2023. Already, there are multiple efforts that will run concurrently.

IEEE 9274.2.1 standardizes the JSON-LD xAPI Profile document. The manual for implementing the data strategy implied by a JSON-LD document needs to be understood by, well, people. Not just academics or engineers, but like… anyone… who would actually have to make decisions based on a standard, knowing it kinda has to work more like how USB or lightbulbs work as standards than, say… well, SCORM. As a result, there will be a 9274.2.2 activity starting very soon to figure out what all has to go into that documentation.
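For a sense of the artifact being standardized, here’s a minimal skeleton of an xAPI Profile, written as a Python dict standing in for the JSON-LD document. The field names follow the published xAPI Profiles spec; every example.org IRI in it is invented for illustration.

```python
# A minimal, hypothetical xAPI Profile skeleton. Field names follow the xAPI
# Profiles spec; the example.org IRIs are made up.
profile = {
    "@context": "https://w3id.org/xapi/profiles/context",
    "id": "https://example.org/profiles/field-safety",
    "type": "Profile",
    "conformsTo": "https://w3id.org/xapi/profiles#1.0",
    "prefLabel": {"en": "Field Safety Profile"},
    "definition": {"en": "Governs verbs and activity types for field safety training."},
    "versions": [{
        "id": "https://example.org/profiles/field-safety/v1.0",
        "generatedAtTime": "2021-02-22T00:00:00Z",
    }],
    "author": {"type": "Organization", "name": "Example Org"},
    "concepts": [{
        "id": "https://example.org/verbs/inspected",
        "type": "Verb",
        "inScheme": "https://example.org/profiles/field-safety/v1.0",
        "prefLabel": {"en": "inspected"},
        "definition": {"en": "The learner inspected a piece of equipment."},
    }],
}
```

The 9274.2.2 documentation effort exists precisely because a non-engineer has to be able to read a document like this and make governance decisions from it.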

If you want to work with xAPI in a way that’s easy enough just by “reading the instructions,” get involved. Contact me to get notified and reminded about these (will-be) monthly discussions. 🙂

Next post? Maybe it’s time to look at a little bit of question-storming that’s going to result in some learning analytics strategy.

Filed Under: Learning Analytics Tagged With: elsevier, ieee, journals, learning analytics, praxis, publishing, research, solar, telemetry, xAPI Profile Server, xAPI Profiles

2020 was all About xAPI Profiles (for us)

January 19, 2021 By Aaron


A few days ago, Megan and I passed a milestone: seven years of MakingBetter. 2020 was all about xAPI Profiles (for us). It’s hard for me to grok how much we got done in 2020 considering the trash fire of a year it was.

Like many of you, Megan and I had to seriously change how we’ve managed our work/life balance, household, workflow(s). In this post, I’ll highlight what we shipped last year through MakingBetter.

The ADL xAPI Profile Server

In 2018, Megan and I drafted requirements for an xAPI Profile Server. Two years later, in 2020, MakingBetter built it with ADL.

We teamed with our besties at Veracity, keeping a learning standards team-up between Jono Poltrack and myself that spans almost two decades now, and keeping the running t-shirt jokes going between Megan and Tom Creighton. I digress.

To summarize, the ADL xAPI Profile Server is a revolutionary approach to governing data for xAPI-supporting applications. It’s an authoring tool for xAPI Profiles. It’s an RDF server, so it processes all the semantic connections on the back-end, generating JSON-LD concepts while you author. It enables authoring tools and independent development environments to be dynamic and driven from the same authoritative document. Updating an xAPI Profile hosted on a server makes the changes available to anything subscribing to that profile. Imagine managing taxonomies and ontologies for learning data that don’t necessarily require re-authoring.
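As a sketch of what “subscribing” could look like from a client’s point of view: fetch the hosted profile, read its concepts, and drive your tooling from it. The server was pre-release when this was written, so the URL and response shape below are assumptions, not its documented API.

```python
import json
import urllib.request

# Hypothetical: fetch a hosted xAPI Profile and list its verb concepts.
# The real ADL xAPI Profile Server API may differ; the URL and response
# shape here are assumptions for illustration.
PROFILE_URL = "https://profiles.example.org/profiles/field-safety"

with urllib.request.urlopen(PROFILE_URL) as resp:
    profile = json.load(resp)  # the JSON-LD profile document

for concept in profile.get("concepts", []):
    if concept.get("type") == "Verb":
        print(concept["id"], "-", concept.get("prefLabel", {}).get("en", ""))
```

Because everything subscribes to the same authoritative document, re-running a loop like this after the profile is updated is all it takes to pick up the change.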

Megan provided the vision and technical requirements that drove this effort, and led the technical development. Crystal Kubitzky did cracking (been watching a lot of Great British Bake-off) UX research and design, making full use of the USDS design system. Eva Rendle took on the unenviable task of managing the herculean effort to design, develop and deliver this product in 13 months’ time.

As a result, it represents the best of what can happen when everyone on a project is oriented to the same goal. Collective focus, talent and hope drove this project in 2020. The love our team put into that effort shines through when you use it.

It launches in Q2 2021. Watch me introduce and demo some of the features of the ADL xAPI Profile Server here.

xAPI Profile Authoring

MakingBetter worked with Veracity to produce a significant xAPI Profile. For the long game on xAPI, we produced a profile with ADL to support the prototype development of the Total Learning Architecture (TLA). The profile expresses the TLA’s Master Object Model (MOM). In the coming weeks, expect posts that surface lessons learned about profile authoring and thoughts on the enterprise learning analytics strategy implied by this work.

For now, if you’re interested in reading more on the MOM and the TLA, read here.

More?

In the next post, I’ll get into what all I’m doing, with all the different hats I get to wear. After that, with that bit of needed meta established, I can get into blogging about the really real nitty-gritty of working with xAPI.

Filed Under: Open Source Tagged With: master object model, xAPI Profile Server, xAPI Profiles

Metrics in Learning Analytics

January 13, 2021 By Aaron

Before life in the United States was rudely interrupted (to put it extremely mildly) last week, I teased the concept of metrics in learning analytics. Where dimensions frame a problem space, metrics are (literally) the things I look at to model the problem space. In other words, metrics are how I make sense of an analysis.

Let’s learn about metrics in learning analytics!

Problem spaces need a model that describes the space (dimensions) and enables me to make sense of it, or “perform an analysis.”

More than counting.

My analysis is always going to be defined by my metrics. Often, I need to count or tally things, but COUNTS ARE NOT ANALYSIS. In presentations, I often talk about how an xAPI statement is an observation. I explicitly expect that a given metric will have xAPI statements supporting its observations (observations = metrics = xAPI statements).
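Here’s what that mapping looks like with one hypothetical observation, simplified to the essentials. The metric it supports might be “questions answered correctly per learner”; the statement is the observation feeding it.

```python
# One observation, expressed as a (simplified, hypothetical) xAPI statement.
# Metric it supports: questions answered correctly per learner.
observation = {
    "actor": {"mbox": "mailto:pat@example.com", "name": "Pat"},
    "verb": {"id": "http://adlnet.gov/expapi/verbs/answered",
             "display": {"en-US": "answered"}},
    "object": {"id": "https://example.org/activities/question-12",
               "definition": {"type": "http://adlnet.gov/expapi/activities/cmi.interaction"}},
    "result": {"success": True},
    "timestamp": "2021-01-13T14:05:00Z",
}
```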

Here’s an example of how I generally approach learning and job-task analysis (relating the learning to the doing). While these dimensions of the learner, the learning activity, and the learning experience might make sense to a learning professional, it’s important to remember that these dimensions aren’t (and shouldn’t be) the only way to do learning analytics. I’m expressing a model complex enough that even if 80% of readers copy and paste it without further forethought, it will hopefully point them in better directions than they were headed before reading (the whole “making better” thing).

“All models are wrong, but some are useful.”

Anyone putting together a learning analytics strategy is arbitrarily framing a box, more or less, in which learning happens. Only through the pinholes of such a theoretical box can learning be observed. Then, only by putting those observations together can an analysis be performed.

[Figure: a simple model of learning analytics dimensions (Learner, Learning Activity, Learning Experience): pinholes in a theoretical closed box]

So what should those pinholes be? This seems like a pretty straightforward question. Working with stakeholders to define metrics in learning analytics requires patience, planning and practice. I’ll share methods for getting to these metrics in the coming weeks.

Barring further interruptions, tomorrow I’ll review MakingBetter’s work over the past year. On Friday, expect some catch-up on what I have been up to, with Elsevier, ADL and IEEE. Lots. To. Share.

Filed Under: Learning Analytics Tagged With: dimensions, learning analytics, metrics, strategy, xAPI

Dimensions in Learning Analytics

January 5, 2021 By Aaron

[Figure: a simple model of learning analytics dimensions: Learner, Learning Activity, Learning Experience]

Dimensions, in learning analytics, are the ways we describe the (theoretical) space in which we’ll analyze… learning. I think of a given xAPI Profile as the technical instructions I want tools, systems and services to calibrate on for the way I need to make sense of the data. Yesterday, I shared a demo of the forthcoming ADL xAPI Profile Server. Today, I step back to share a mental picture of how learning analytics can be framed.

In 2019, I collaborated with Judy Katz on some ideas for visualizing learning analytics strategy for the xAPI Camp @ Learning Solutions 2019. Judy and I each had our own take at the time on how we might frame the “space,” but with almost two years to reflect and multiple opportunities to put those ideas into practice, it’s abundantly clear that how we each labeled the dimensions was less important than the fact that we organized and measured similarly.

Dimensions

A simple way to think about learning analytics is that when we want to analyze learning, we look for information about:

[Figure: a rather simple model of the dimensions of learning analytics]
  • The Learner,
  • The Learning Activity, and
  • The Learning Experience

… and all of these analyzed over Time as a constant means of comparison.

Dimensions tend to stay fixed, once set. The trajectories along which we measure things will likely remain the same. Much investment of time and human capital gets built on this framing, so before anyone codes anything, this model should be treated as something that can be revised, even tossed out, in favor of something better. The investment of time and effort in planning is minimal, no matter how long the process takes, compared to the costs of implementing the wrong learning analytics strategy.

Metrics

Along each of these dimensions, I’d identify metrics. Metrics help to understand the dimension of learning I’m analyzing. For example, I might break a Learner dimension down into the type of Learner, or user, anticipated by the learning experience. If I’m developing a learning solution for practicing nurses, the “Learner” likely includes the “nurse” but may also include other roles, like a Nurse Preceptor or a Nurse Educator. A dimension like “Learner” should account for every type of participant.
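One way to sketch that breakdown in code, using the nursing example (the roles come from the example above; the context extension IRI is hypothetical):

```python
from enum import Enum

# The "type of learner" metric along the Learner dimension, per the nursing
# example. The roles and the extension IRI are illustrative.
class LearnerRole(Enum):
    NURSE = "nurse"
    NURSE_PRECEPTOR = "nurse preceptor"
    NURSE_EDUCATOR = "nurse educator"

def role_of(statement: dict) -> LearnerRole:
    """Resolve a statement's actor to a learner role, here via a
    hypothetical context extension recorded at tracking time."""
    ext = statement.get("context", {}).get("extensions", {})
    return LearnerRole(ext.get("https://example.org/extensions/role", "nurse"))
```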

Tomorrow, I’ll start diving into the ways I might break down those metrics more thoughtfully, related to the Learning Activity vs. the Learning Experience.

Download: xAPI-Camp-Learning-Solutions-2019-Learning-Analytics-Strategy

Filed Under: Learning Analytics Tagged With: dimensions, learning analytics, metrics, strategy, xAPI Profiles

Level Up Learning Data with xAPI in 2021

January 4, 2021 By Aaron

You’re likely going to be reading up seriously on how to work with learning data sometime in 2021. This will be true regardless of whether you work in online K-12, Higher Education or professional training — anywhere in the world. If you will “own” xAPI know-how on your team (or maybe for the whole organization), I want to help you.

Megan and I have certainly been in your shoes.

So, I made a resolution for 2021 – I’m going to try and blog daily again like I did maybe 15 years ago about Flash For eLearning. It’s time for us all to level our data game up.

First, let’s start with this webinar (above), where I recently demonstrated a new tool coming in Q2: the xAPI Profile Server we produced for ADL with our friends from Veracity. In the video, I explain what this is, what problems it’s going to help with and how you’ll likely put it to work. Spoiler: it’s a tool to help you author and serve xAPI Profiles, doing a lot of the heavy lifting of complex rules thingies for you.

Next, I’ll blog about how to frame a problem space to apply learning analytics with dimensions.

Filed Under: Experience API, Learning Analytics Tagged With: demos, recordings, webinars, xAPI, xAPI Profile Server, xAPI Profiles

The One-Year Countdown to an xAPI Profile Server

November 18, 2019 By Aaron

tl;dr: Wednesday, 20 November at 1pm EST, the Advanced Distributed Learning (ADL) Initiative will host a webinar on an xAPI Profile Publishing Tool & Server being developed for 2020. 

Since 2017, Megan & I have been pretty heads-down, laser-focused on doing what we can to harden our best practices to work with xAPI to further support ADL’s mission. We researched and documented conformance requirements for learning record stores (LRSs). We gathered best practices and worked with deep subject matter experts to develop a companion spec to xAPI (xAPI Profiles). We gathered the requirements for an xAPI Profile Server capable of publishing, validating and serving (via an API) xAPI Profiles. Now, thanks to ADL, we’re designing and developing a reference xAPI Profile Server with our friends (and former colleagues) at Veracity Technology Consultants!

How Will an xAPI Profile Help L&D Capabilities Scale?

Depending on how removed you are from touching anything that looks like code, if you’re reading this, you’re likely to have some idea of how much non-value-added work goes into how L&D content and services are delivered today. Software integrations are difficult and almost always custom. Testing eLearning in LMSs (specifically SCORM or AICC) is manual and laborious. Reporting anything other than completions and scores from your LMS can seem impossible. Even if you’re managing content, you’re probably not managing what given learning content tracks, let alone what it reports to the LMS. This state of eLearning has stagnated for over 10 years, leaving the critical processes to author, publish, test, deliver and revise eLearning content everywhere rigid (and often fragile).

xAPI Profiles make it possible to exercise some control over the quality and consistency of learning data generated by related learning activities. As specified today, xAPI Profiles directly address how verbs, activity types, attachment types and extensions can be governed. The xAPI Profile Server will provide a wizard-like interface to make authoring and publishing easier. It will support publishing of the JSON-LD xAPI Profile so the technical rules that should govern any subscribing learning experience can be managed together with one tool. Users will be able to validate their xAPI Profiles. The application will serve xAPI Profiles via an API, enabling workflow integrations with the means to revise and push changes in xAPI Profiles to subscribing systems and services.
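A minimal sketch of the kind of check a subscribing system could run. Real profile conformance goes much further (statement templates, patterns); this only polices the verb vocabulary, and every IRI is hypothetical.

```python
# Profile-driven governance, reduced to its simplest form: reject statements
# whose verb isn't declared in the profile the system subscribes to.
profile = {
    "concepts": [
        {"id": "https://example.org/verbs/inspected", "type": "Verb"},
        {"id": "https://example.org/verbs/repaired", "type": "Verb"},
    ]
}

def allowed_verbs(p: dict) -> set:
    return {c["id"] for c in p.get("concepts", []) if c.get("type") == "Verb"}

def verb_conforms(statement: dict) -> bool:
    return statement["verb"]["id"] in allowed_verbs(profile)

print(verb_conforms({"verb": {"id": "https://example.org/verbs/inspected"}}))  # True
print(verb_conforms({"verb": {"id": "https://example.org/verbs/clicked"}}))    # False
```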

On Wednesday, 20 November, Jason Haag of Veracity Technology Consultants and I will co-present on the work our teams are doing together. We’ll talk about some ins and outs of what you can do with xAPI Profiles, and some more details on what to expect from the xAPI Profile Server. Sign up for the webinar: https://attendee.gotowebinar.com/register/7745952068399329549

Filed Under: Experience API

xAPI Camp Preview: dominKnow ONE

March 19, 2019 By Aaron

On March 26, 2019, I’m gathering the best examples of the tools that enable, and the professionals who define, xAPI’s best practices at xAPI Camp (number 14!) hosted by the eLearning Guild at the Learning Solutions conference in Orlando, FL. In the run-up to the event, I’m highlighting these industry colleagues and why I think they’re enabling and defining best practices. Today’s post focuses on what dominKnow is doing for savvy instructional designers with their ONE platform, with different authoring modes for different types of content authoring.

If you have some questions about the learner, the learning experience or the efficacy of the learning program, there’s some impressive flexibility in dominKnow ONE I want to point out.

About that Competency Dashboard in RISC…

In my last post, I shared a competency reporting dashboard in RISC’s VTA. RISC’s dashboard (like many useful dashboards) relies on specific, well-formed data. Two companies offer such flexibility to customize what and how you track learners engaging with their authoring tools: Trivantis and dominKnow.

For, seriously, a little extra effort, generating custom statements that conform to profiles, or that just follow best practices, offers so many more valuable insights. In what’s been demonstrated previously, much of the workflow to track against competencies or learning objectives is even automated, meaning that as a content author, you wouldn’t have to “code” so much as keep content organized… which you’d want to do anyway as just good instructional design. dominKnow ONE by default, and with no programming, provides a hierarchy and tracking of Course > Module > Learning Object, in which the “Learning Object” contains the content and interactions that, together, would support meeting a particular learning objective or competency. When a learner satisfies the requirements to complete the learning object, the dashboard will show that the learner has met the competency requirement(s).
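In xAPI terms, a hierarchy like that typically rides along with every statement as context activities. A hypothetical sketch (the IRIs are invented, and dominKnow ONE’s actual statements may differ):

```python
# How Course > Module > Learning Object can show up in a statement: the
# learning object is the statement's object; its module and course ride
# along as context activities. IRIs are hypothetical.
statement = {
    "actor": {"mbox": "mailto:learner@example.com"},
    "verb": {"id": "http://adlnet.gov/expapi/verbs/completed",
             "display": {"en-US": "completed"}},
    "object": {"id": "https://example.org/lo/hand-hygiene"},  # Learning Object
    "context": {"contextActivities": {
        "parent": [{"id": "https://example.org/module/infection-control"}],  # Module
        "grouping": [{"id": "https://example.org/course/patient-safety"}],   # Course
    }},
}
```

With statements shaped like this, a dashboard can roll completions up from learning objects to objectives without the author ever writing code.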

Taking it to the next level and optimizing an individual’s learning

You probably already know that xAPI allows us to track all sorts of information, including the aforementioned objectives/competencies. The data can also be leveraged by the content itself, though, a capability that many rapid authoring tools just don’t offer. At next week’s xAPI Camp at Learning Solutions, dominKnow will demonstrate how you can track author-defined competencies in your content and then use the xAPI data you tracked to dynamically personalize the learner’s content. If a learner has already demonstrated a given competency, why should they be forced to re-demonstrate it, or an author need to create unique learning content for each use case? Not only is it cool, it’s interoperable, and dominKnow will demonstrate that while doing this requires instructional design skills, no coding skills are required.

Why is this good?

Out of the box, dominKnow makes it easy to organize content without locking you into that default structure, to leverage that organization to personalize learning, and to more easily tie it all to your company’s objectives. That’s exactly the flexibility and ease I want when tracking and reporting with xAPI. For most content I need to work with, the 1:1 Learning Object:Competency Objective relationship works, but if I have something more complex, I can take dominKnow ONE down the rabbit hole. As much of xAPI’s best practice requires tinkering, the more flexible the tools available to me, the more use I have for them.

Filed Under: Content Strategy, Experience API, Learning Analytics

xAPI Camp Preview: RISC VTA

March 12, 2019 By Aaron

On March 26, 2019, I’m gathering some of the best tools that enable, and the professionals who define, xAPI’s best practices. This will happen at the upcoming xAPI Camp (number 14!) hosted by the eLearning Guild at the Learning Solutions conference in Orlando, FL. In the run-up to the event, I aim to highlight who’s going to be there, and why I think they’re enabling and defining best practices. Today’s post focuses on the outstanding capabilities RISC has built into its Virtual Training Assistant (VTA) learning management system that enable flexible training approaches, like spaced learning.

When Knowing Matters

A question I get asked regularly is “when is xAPI really worth doing?” and my answer is always something like “xAPI is worth doing when the stakes are high and the cost of failure is too heavy.” When it comes to safety and health — when lives are at stake — that’s when a deep analysis with xAPI is worth the effort. RISC’s VTA is geared for high-stakes compliance needs. If your workforce is on a gas or oil pipeline and you care that they can do the job and not hurt themselves or others in the process, then knowing the capabilities of a given worker and the team(s) they’re on matters.

My funk soul brother Duncan Welder goes into detail on how RISC approaches tracking competencies in VTA here. It’s worth the read, if only to see the dashboard in action, and I can tell a lot about how they’re using xAPI from what I see there.

Why is this good?

Checklists, Competency Assessments, Questions, Pass/Fail activities — these are different types of xAPI activities VTA is reporting on. Using xAPI to also detail incident reporting makes it possible to juxtapose the learning activities with real-world outcomes to demonstrate the correlations between learning performance and job performance. There is so much depth we can get into with this established, as RISC can continue to get more granular with the progress toward specific competencies and the incident trends over time.

That’s exactly what I want to track and report out with xAPI. From what I’ve seen in the market, there are some really good LRSs providing solid reporting on data when there’s no assumption about what’s actually in the data. RISC, however, has a history of designing reports that tell specific stories about specific learning activities, like their PDF Annotator and Message Board features. What RISC can report may not be for everyone, but if you want specific insights, there’s no other vendor on the market able to give you these insights with this fidelity.

Andrew Downes, in his stunning blog series documenting just what exactly customers are tracking with Watershed, breaks down a high-level categorization of learning analytics distinguishing the learner, the learning experience and the learning program. Downes finds that Watershed customers’ learning analytics are mostly about the learning program, and that very little of what Watershed customers track concerns the learning experience or the learners themselves. In that light, it’s even more impressive to me that RISC is tracking competencies in this way, because performance-to-competency is a dimension that sheds light not just on the learning program but on the learner and the learning experience, too.

How does this work with “spaced learning?”

When Duncan and Art Werkenthin present at xAPI Camp on March 26, they’re going to focus on spaced learning: the delivery of small, repetitive pieces of learning content with breaks between the learner’s sessions with the content. It’s a researched approach to learning content delivery that is particularly effective for long-term retention. There are a lot of technical things RISC is doing with xAPI (like relying on cmi5 to reliably handle the launch of xAPI-enabled learning content). VTA’s support for cmi5 reduces the variability of the learning experience and technically enables a spaced learning approach (among lots of other learning experiences it enables). The reality is that RISC’s implementation of cmi5 makes a lot of things possible where we need to know who the learner is, and we want to track learning activity to an LMS.
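To make the cmi5 piece concrete: the spec defines the query parameters an LMS passes when it launches an assignable unit (AU), which is what takes the variability out of launch. A sketch of the AU side in Python (the URL itself is made up; the parameter names come from the cmi5 spec):

```python
from urllib.parse import parse_qs, urlparse

# cmi5 launch parameters: endpoint, fetch, actor, registration, activityId.
# The example launch URL is invented.
launch_url = ("https://content.example.org/au/index.html"
              "?endpoint=https%3A%2F%2Flrs.example.org%2Fxapi"
              "&fetch=https%3A%2F%2Flms.example.org%2Ffetch%2Fabc123"
              "&actor=%7B%22mbox%22%3A%22mailto%3Apat%40example.com%22%7D"
              "&registration=0fc0ed0a-cd3a-4c4d-a454-9e0137c3b16e"
              "&activityId=https%3A%2F%2Fexample.org%2Fau%2Fsafety-1")

params = {k: v[0] for k, v in parse_qs(urlparse(launch_url).query).items()}
print(params["endpoint"])      # where the AU sends its xAPI statements
print(params["registration"])  # ties every statement to this enrollment
```

Because the LMS always tells the content who the learner is and which registration it belongs to, each spaced-learning session lands in the LRS already stitched to the same enrollment.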

And, with RISC able to track a spaced learning experience, they can look at the data on the content delivered over time against the performance outcomes (like incidents reported) and optimize the spacing relative to the type of content, the subject of the content, the complexity of the job performance, etc. VTA clearly gives you all the tools you need to do this. All you need is content that uses xAPI in specific ways, producing useful, appropriately formatted data to populate these reports and deliver insights.

For that, I can’t wait to share my next blog post on what dominKnow is doing that enables what RISC (and other tools) can report on.

Interested in xAPI visualizations but can’t make xAPI Camp? Duncan and Art have a session Wednesday, March 27th at Learning Solutions on getting measurable results with xAPI, titled Design with the End In Mind.

Filed Under: Experience API, Learning Analytics Tagged With: best practice, competencies, xAPI

