Artificial intelligence has emerged as a tool for organizations to gather deeper insights into their business data. In this conversation with Neha Gupta, True Office Learning CEO, she shares her thoughts on how AI can be woven into compliance successfully.
This interview has been edited for clarity.
How are compliance training methods evolving to embrace digital technologies?
Neha Gupta, True Office Learning CEO: In a lot of ways, compliance training has been stuck in the ‘80s ever since PowerPoint and video came out. It just inherently got stuck there with some version of narrated, moving PowerPoint. People are still trying to dress that up with better pictures, better visuals, slightly newer videos, but it's still a very one-size-fits-all, traditional medium.
We need to make that experience very aligned with every modern-day experience we have, which is all about the user. If you think about how compliance training historically has been designed, it's all about whatever information the company wants to present, and it's all kind of thrown at the learner as sort of a textbook: “This is what we want to teach you. Here's a bunch of information that we're just presenting.” The fact that you click “Next” through it means you saw it, and we've done our job right.
The biggest shift from that is really changing the lens to understanding who you are as a learner—whether it's the DOJ's new guidance, whether it's how organizations are starting to think about their programs—who is the learner? What is their risk profile? What is most relevant to them? And also, what is their ongoing performance? What do they already know? Then, most importantly, how effectively can that learner actually apply the information they're learning, rather than just confirming whether they received a copy of the textbook?
The other big trend that's an industry shift in compliance training is scale. Historically, you had a compliance training course that you built, and you would roll out the same compliance training course for 3-5 years until there was sort of anarchy in the organization about people refusing to take the same course again. Then you changed out the pictures, changed out the audio, kind of dressed it up a little bit and tried to go for a different flavor, a different shade.
Now, innovation is really shifting that, moving from merely decorative components to genuinely innovative learning design: improving the actual models with which we teach people the stories and the applications. It's really evolving with everything from adaptive learning algorithms to artificial intelligence to machine learning. Across the board, there's actual use of technology innovation in compliance training.
What are your thoughts on the relationship that artificial intelligence has with compliance training?
The premise of artificial intelligence is data. Unless you are capturing and utilizing data effectively, you cannot make a system artificially intelligent. The whole point of artificial, as opposed to natural, intelligence is that it needs something to go off of. So ... as more compliance training becomes software, artificial intelligence is the next natural step in the journey.
Today, most organizations are probably talking about artificial intelligence as an outside concept. As in, “Is it ethical to use artificial intelligence? Where can we use artificial intelligence in our products?” A lot of people don't realize they are already using some level of artificial intelligence. There's a lot of recruiting software, there's a lot of process software that already has a lot of these algorithms baked in.
AI is the natural next step in making the world more learner-centric. You cannot keep remembering every learner’s history, journey, profile, and knowledge by human effort or even some sort of a catalog or tagging system because the scale of that just becomes insane. So artificial intelligence is, without question, the way to keep delivering a learner-centric experience to the learner over time and to help the system get smarter on who the learner is, what they know, and what they struggle with. AI guides how to use that to really make sure that they keep beating the forgetting curve, that they keep getting nudges and booster shots at the right time, that they're rewarded for the stuff they already know really well.
AI is probably more of a buzzword right now for most compliance professionals. They don't necessarily understand it fully, and those who have been delving into it have done so mostly to determine the ethical implications of using it from a product perspective.
The beauty of using AI in compliance training is that you're not actually using it to make any negative decisions about the learner. If anything, you're using it to reward the learner by really giving them only what they need, rather than kind of throwing the kitchen sink at them.
So while it doesn't feel like an instinctive place to use AI, it is actually a very impactful and relatively foolproof way to use AI. From an end user perspective, it magnifies the benefits you would get from an adaptive learning experience and also the user experience overall because now, as a person, I can feel respected knowing that all my interactions with the system are adding up to something and that there's value to those interactions beyond just showing that I've done what I needed to do.
Given the current environment, are there any changes to the ethical implications of artificial intelligence?
I think the biggest shift in the implications with the COVID-19 pandemic is that we're going to see AI adoption accelerate, much like all digital adoption. People are going to have to manage more processes in more diverse ways than they did before.
For example, if we were recruiting for a position three months ago, I would've said that position is New York City-based only. Today, I'm not so sure that makes sense anymore. That means my candidate volume is going to go up 10x. But now, to sift through those candidates—either I'm going to have to spend the time, or I'm going to have to use technology to get smarter about it. Now I need a more efficient way to do something I would have done differently before.
That's a very simple example, but you're going to find the same thing in identifying which policies are out of date. Say you went ahead and changed a policy today on workplace travel. You probably still have a bunch of related policies, on client entertainment, other expense categories, and so on, that also need to be updated downstream because you had this massive shift to your travel policy.
These are places where AI and a lot of the technology landscape can really shift that for you, rather than you having to manually go through that journey. If you're using these more sophisticated systems, a lot of those triggers and next steps can [either] happen automatically or at least be flagged for you so you're not on a fact-finding mission of figuring out where you should be looking and which rocks to turn over.
I think we're going to see a pretty high degree of acceleration and some really interesting applications come out to help people do jobs that were previously just simpler, by definition.