Today, I’d like to put a spotlight on a small but important concept in analyzing changes (of any kind, I suppose). When I’m doing self-reflection or analysis, and especially as I work with normalized (and sometimes unstructured) data, it can be a frustrating sequence of going down rabbit holes to try to pull understanding from a change in information. When I analyze things for deltas, I’m looking at the change in value. Before I analyze things for deltas, though, I look for changes in the “shape” of the information (or data): those are lambdas. Thanks to COVID variants, I can’t figure out whether my shorthand aligns with other architects’ conventions. What’s important is that one is for the value, the other is for the shape.
Why Lambdas First?
Oftentimes, working with huge amounts of data that are largely uniform in structure, I don’t risk starting the work of figuring out a delta until I know how much it’s worth doing. If the shape of the data is different from what it used to be, that’s where I start to prioritize processing. When I’m looking at multiple variables, I look for where the most change is happening. Where decisions about one variable depend on the resolution or outcome of another variable, I put them in order like dominos and let them fall into place naturally, tackling things from a starting place that accelerates through unknowns, turning them into knowns.
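That domino ordering is, at heart, a topological sort: resolve the variables nothing depends on first, then the ones that depend only on those, and so on. A minimal sketch with Python’s standard-library `graphlib` (the variable names here are hypothetical, just to show the shape of the idea):

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Hypothetical dependencies: each variable maps to the set of
# variables whose outcomes it depends on (the dominos that
# must fall before it can).
deps = {
    "retention": {"engagement"},
    "engagement": {"completion"},
    "completion": set(),
}

# static_order() yields variables so that every dependency
# comes before the thing that depends on it.
order = list(TopologicalSorter(deps).static_order())
print(order)  # ['completion', 'engagement', 'retention']
```

Each resolved unknown becomes a known that the next decision can safely build on.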
When the shape of the data is different (bandwidth, size, length, timestamp, etc.), it tells me there are new values to look at.
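The lambda-before-delta check can be made concrete. A minimal sketch, assuming a record is a flat dict (the field names below are made up for illustration): compare the structure first, and only bother with value comparison when the structure matches.

```python
def shape(record):
    """Structural signature of a record: field names and value types, not values."""
    return {key: type(value).__name__ for key, value in record.items()}

def classify_change(old, new):
    """'lambda' if the shape changed, 'delta' if only the values did."""
    if shape(old) != shape(new):
        return "lambda"  # new fields or types: new values worth looking at
    if old != new:
        return "delta"   # same shape, different values
    return "no change"

before   = {"size": 1024, "timestamp": "2021-06-01T12:00:00Z"}
bigger   = {"size": 4096, "timestamp": "2021-06-02T12:00:00Z"}
reshaped = {"size": 4096, "timestamp": "2021-06-02T12:00:00Z", "bandwidth": 12.5}

print(classify_change(before, bigger))    # delta
print(classify_change(before, reshaped))  # lambda
```

A shape change is the signal to prioritize; a delta is the analysis itself.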
Lambdas, the S3 Kind
So let’s say you send all your #xAPI data to an LRS. When you want to analyze it, you export the JSON out of your LRS and… into an S3 bucket, as a .json file. In the olden times, this would make that S3 bucket just file storage. Except the only files we care about here are .json files, which means we know something about them, and they’re pretty easy to consume (if you’re a machine).
Amazon has a service layer atop Simple Storage Service (S3), called S3 Object Lambda, where we can attach our own code to GET, LIST, and HEAD requests to modify and process data as it is returned to an application. So, as you automate getting data out of the LRS and into S3 for analysis, and then pull it back out of the bucket, these Lambda functions can perform all sorts of operations based on the shape of the data. Not dissimilar to how xAPI Profiles define determining factors and patterns, it stands to reason that there’s a role for S3 Object Lambdas in realizing the benefits of the rules established within an xAPI Profile.
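To make that concrete, here is a sketch of the kind of shape inspection such a function might run over an xAPI statement. This is just the core logic: in a real S3 Object Lambda you would wrap it in a handler that fetches the object via the event’s `getObjectContext` and returns it with boto3’s `write_get_object_response`; that wiring, and the sample statement below, are omitted/invented for illustration.

```python
import json

def json_shape(value, depth=0, max_depth=3):
    """Recursively summarize a JSON value's structure: keys and types, not values."""
    if isinstance(value, dict):
        if depth >= max_depth:
            return "object"  # stop descending past max_depth
        return {k: json_shape(v, depth + 1, max_depth) for k, v in sorted(value.items())}
    if isinstance(value, list):
        # Summarize a list by the shape of its first element
        return [json_shape(value[0], depth + 1, max_depth)] if value else []
    return type(value).__name__

# A deliberately minimal, made-up xAPI-style statement
statement = json.loads("""{
  "actor": {"mbox": "mailto:learner@example.com"},
  "verb": {"id": "http://adlnet.gov/expapi/verbs/completed"},
  "object": {"id": "http://example.com/course/1"}
}""")

print(json_shape(statement))
# {'actor': {'mbox': 'str'}, 'object': {'id': 'str'}, 'verb': {'id': 'str'}}
```

Comparing these signatures across exports is one way to notice a lambda (new fields, new extensions, new nesting) before spending any effort on deltas.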
Lambdas, the Self-Work Kind
I wrote a bit ago about coming to terms with being autistic, and given how overwhelming it is to re-process an entire life filled with moments of “not having all the info” about my own self, I hit a point in my self-work where I was exhausted trying to figure out what I should learn from every random rabbit hole I’d go down. In order to heal, I recognized that if I didn’t actively assert some management of (waves hands) all the things, I’d be spinning out a lot. With every rabbit-hole analysis, I could come out with insights and epiphanies, but they could be invalidated by new information from another rabbit hole. Getting a handle on the tech debt of unresolved trauma, finding or forcing some closure on issues that needed to be closed, reconciling past states of thinking and behavior with new information… understanding the shape of changes helped me prioritize and stack self-work up so that insights begat net-constructive new insights, without having to rework stuff I’d just processed.
I look at the state of knowing at a point in time in my life, and look at the shape of the information later, in a new situation. Then I compare, version, update, and identify the downstream impacts of the new information.
So, like, “if I knew back when I was a kid…”
“If my parents knew/understood when I was a child…”