Much like any content or media we use to engage users, a survey demands that we prioritize more than just the survey results; we should also be improving the survey and the survey experience. Here’s how I approach it.
In my case, I’m consulting on the analytics strategy for a survey that serves two purposes: in one use case it acts as an evaluation of a nurse’s career health, and in another it serves as a self-assessment for a healthcare provider on how supportive the clinical work environment is for new nurses. I work on a matrixed team, meaning people management and project management are independent. We rely on sprint ceremonies every two weeks to keep us talking with each other about the work we’re doing. This system scales really well to lots of concurrent efforts, but starting anything new takes fresh coordination, and with new people taking product and design roles on our team, there’s plenty to tackle.
So while Product and Design work out the user experience for the survey and how to access the reports, and figure out which skills and competencies (and, through those, which content) each question might support, I’m doing what I can to expedite things, and if there’s one thing I know about, it’s online surveys. Before I get into any discussion of xAPI and statements and tech stuff, let’s first talk about what we can get out of the interaction data from a survey.
What’s valuable to collect from a survey beyond results?
The survey we’re producing is like a trade standard, and it runs well over 60 questions. The questions, though, come in only four types (a minimal model of which follows this list):
- Multiple-Select
- Multiple-Choice
- Fill-In
- Likert
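Conveniently, all four map onto xAPI’s built-in interaction types (choice, fill-in, and likert). Here’s a minimal sketch of how I might model them; the type names, fields, and scale labels are my own assumptions, not anything Product or Design has signed off on:

```typescript
// Hypothetical model of the four question types; names and fields are
// illustrative assumptions, not settled requirements.
type SurveyQuestion =
  | { kind: "multiple-select"; id: string; prompt: string; options: string[] } // choose any number of options
  | { kind: "multiple-choice"; id: string; prompt: string; options: string[] } // choose exactly one option
  | { kind: "fill-in"; id: string; prompt: string }                            // free-text response
  | { kind: "likert"; id: string; prompt: string; scale: readonly string[] };  // ordered rating scale

// Example five-point agreement scale; the actual labels and point count are still TBD.
const agreementScale = [
  "Strongly Disagree",
  "Disagree",
  "Neutral",
  "Agree",
  "Strongly Agree",
] as const;
```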
Without any UX prototypes yet, I can still make some pretty safe assumptions about the survey experience just from what I already know: enough to start defining data requirements, but not enough to detail the analytics potential of this dataset. It’s also not enough to work solely off someone else’s data requirements, because while I absolutely must meet explicit requirements, too many stakeholders simply lack the inquiry muscles, and sometimes the abstract math thinking, needed to understand just what’s possible to know from a pretty tight data set. My hope is that in this broad case study, I can model some thinking that might help you on your next analytics project.
For my survey, I know that it’s long and will be split over pages:
- this means it’s going to have some kind of navigation
- this means users will likely need to save their progress and come back to their place, with all previously entered responses still filled in
- this means we’ll need to account for different sessions with the same survey activity, capturing (as sketched in xAPI terms after this list):
    - when an activity begins
    - when a session with the activity starts
    - what a user interacts with and how
        - survey questions
        - navigation elements
        - calls-to-action
    - when a user leaves the activity
    - when a user returns to the activity to continue the session
    - when a user resets their survey to start again
    - when a user completes their session
- this means we’ll want to record state information
    - for every survey item a user responded to, the response must be stored
    - for any navigation, the current screen must be stored for recall
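To ground that list before any real requirements land, here’s roughly what those moments could look like as xAPI statements. The verbs are standard ADL vocabulary; the actor, activity IRIs, registration, and session extension are placeholders I’m assuming for illustration:

```typescript
// Rough sketch of the statements behind the list above. The ADL verbs are
// real xAPI vocabulary; the activity IRIs, registration, and session
// extension are placeholder assumptions.
const actor = { mbox: "mailto:nurse@example.com", objectType: "Agent" as const };
const survey = { id: "https://example.com/activities/career-health-survey" };

// One registration per attempt at the survey; one session ID per sitting.
const context = {
  registration: "c8a6e4b2-0000-0000-0000-000000000000", // ties statements in one attempt together
  extensions: {
    "https://example.com/extensions/session-id": "session-001", // placeholder extension IRI
  },
};

const sessionStatements = [
  // when a session with the activity starts
  { actor, verb: { id: "http://adlnet.gov/expapi/verbs/launched" }, object: survey, context },
  // what a user interacts with and how (a survey question, here)
  {
    actor,
    verb: { id: "http://adlnet.gov/expapi/verbs/answered" },
    object: { id: "https://example.com/activities/career-health-survey/q42" },
    result: { response: "Agree" },
    context,
  },
  // when a user leaves the activity mid-survey
  { actor, verb: { id: "http://adlnet.gov/expapi/verbs/suspended" }, object: survey, context },
  // when a user returns to continue (new session, same registration)
  { actor, verb: { id: "http://adlnet.gov/expapi/verbs/resumed" }, object: survey, context },
  // when a user completes their attempt
  { actor, verb: { id: "http://adlnet.gov/expapi/verbs/completed" }, object: survey, context },
];
```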
Analytics Potential
As I wrote before, I have a partner in UX who’s taking responsibility for identifying the requirements for the end-user experience, so there are still requirements to surface; right now, though, I have an idea of how I’m going to wire this because of the assertions I just made above.
This project likely requires capabilities to
- slice and aggregate a single user’s experience completing the survey across multiple sessions with it
- store and recall the state of the survey between sessions so the end user can finish the survey, including bookmarking to come back to where they left off (a sketch of this follows the list)
- compare different users’ experiences in aggregate
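For the store-and-recall capability in particular, xAPI’s Activity State resource is built for exactly this kind of save-and-resume. Below is a minimal sketch of how we might use it; the LRS endpoint, the state document shape, and the stateId are my assumptions, not a committed design:

```typescript
// Sketch of save/resume using the xAPI Activity State resource.
// The LRS endpoint, the state document shape, and the stateId are assumptions.
interface SurveyState {
  currentScreen: string;                // for navigation recall
  responses: Record<string, string[]>;  // questionId -> stored response(s)
}

const LRS = "https://lrs.example.com/xapi"; // placeholder LRS endpoint

async function saveState(activityId: string, agent: object, state: SurveyState) {
  const params = new URLSearchParams({
    activityId,
    agent: JSON.stringify(agent),
    stateId: "survey-progress", // arbitrary stateId we'd standardize on
  });
  await fetch(`${LRS}/activities/state?${params}`, {
    method: "PUT",
    headers: { "Content-Type": "application/json", "X-Experience-API-Version": "1.0.3" },
    body: JSON.stringify(state),
  });
}

async function loadState(activityId: string, agent: object): Promise<SurveyState | null> {
  const params = new URLSearchParams({
    activityId,
    agent: JSON.stringify(agent),
    stateId: "survey-progress",
  });
  const res = await fetch(`${LRS}/activities/state?${params}`, {
    headers: { "X-Experience-API-Version": "1.0.3" },
  });
  return res.ok ? res.json() : null; // a 404 just means no saved progress yet
}
```

A miss on the load is also a useful signal in itself: it helps tell a first session apart from a resumed one when aggregating a user’s sessions.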
In order to realize the analytics potential of the survey, I still need to know a few things that will come from the design of the user experience and from the product requirements for the content, learning objectives, skills, and/or competencies that I’ll reference in the data to fuel analytics.
Anyone who’s familiar with eLearning already sees the potential to use cmi5 to address a lot of the above. While I generally think it’s good, this early on and with so few actual requirements, to have an idea of how to do the job, it’s important not to get locked into one idea of how to get the whole thing done while so little is known.
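For what that could look like: cmi5 defines the session lifecycle verbs from my list above (launched, initialized, completed, terminated, and others), assigns a registration per attempt, and carries a session ID as a context extension. Here’s a rough sketch of the cmi5-flavored context a statement would carry, as illustration rather than a design decision:

```typescript
// Rough sketch of cmi5-flavored context; the registration and session
// values are placeholders, but the category and extension IRIs are the
// ones cmi5 actually defines.
const cmi5Context = {
  registration: "c8a6e4b2-0000-0000-0000-000000000000", // assigned by the LMS per attempt
  contextActivities: {
    category: [{ id: "https://w3id.org/xapi/cmi5/context/categories/cmi5" }],
  },
  extensions: {
    "https://w3id.org/xapi/cmi5/context/extensions/sessionid": "session-001",
  },
};

// A completion statement carrying that context, covering the
// "when a user completes their session" moment from earlier.
const cmi5CompletedStatement = {
  actor: { mbox: "mailto:nurse@example.com", objectType: "Agent" as const },
  verb: { id: "http://adlnet.gov/expapi/verbs/completed" },
  object: { id: "https://example.com/activities/career-health-survey" },
  context: cmi5Context,
};
```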
For now, it’s enough to manage the unknowns by nailing down the questions we’ll want our partners in Product and UX to answer so we can identify the analytics potential of this effort.
My questions so far are:
Psychometric
- What values do we need the Likert and other choices in survey questions to be worth in order to support the differentiation in reporting that helps insights stand out? (A scoring sketch follows.)
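To illustrate what I mean by “worth”: a five-point agreement scale is often coded 1 through 5, with negatively worded items reverse-scored so that higher always points the same direction. A hedged sketch, assuming a five-point scale and a made-up reverse-scored flag:

```typescript
// Hypothetical Likert scoring; the 1-5 coding and the reverse-scored flag
// are assumptions to show the kind of decision we need from psychometrics.
const LIKERT_POINTS = 5;

function scoreLikert(responseIndex: number, reverseScored: boolean): number {
  const value = responseIndex + 1;                          // 0-based choice -> 1..5
  return reverseScored ? LIKERT_POINTS + 1 - value : value; // flip negatively worded items
}

// "Strongly Agree" (index 4) on a reverse-scored item counts as 1, not 5.
console.log(scoreLikert(4, true)); // -> 1
```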
Product (And Content)
- What content, media, criteria, or competency sets should be associated with a given survey item?
- How will this survey be maintained over time?
- What do our customer stakeholders need from the reporting of the survey results?
Design
- How will users engage with and complete the survey?
- How will users engage with the reporting?
- How will we plan to improve the survey and/or the survey experience over time?
Next Steps
Once I get some more info to address my open questions, I’ll be able to share how I use that information to ideate on the analytics potential, resulting in a table of requirements mapped to use cases for analytics 🙂