Responsible AI, real advantage: Why ISO 42001 matters now

4th June - 35 minutes

In this episode of Future in Focus, Xavier Francis is joined by Kevin Franklin, Chief Product Officer at LRQA, to explore the pivotal role of AI governance in today’s rapidly evolving digital landscape. Together, they discuss how ISO 42001 offers a timely and practical framework for managing AI-related risks, building trust and staying ahead of global regulation. With insights drawn from Kevin’s extensive background in systems and standards, the conversation highlights why responsible AI deployment is essential for organisations of every size and why ISO 42001 is set to become a foundational tool in doing it right. 

LRQA: The Future in Focus

Hello everyone, and to all of our listeners worldwide, welcome to LRQA's Future in Focus podcast. My name is Xavier Francis, and it's my pleasure to host this episode for you. Today I'm joined by a very special guest, Kevin Franklin, Chief Product Officer at LRQA. And in this episode, 'AI, Risk and What Comes Next: Why ISO 42001 Matters Now', we'll explore the growing importance of AI governance and how ISO 42001 provides a practical framework for organizations to manage risk, build trust, and prepare for future regulation. Kevin will share why ISO 42001 is a game changer for businesses looking to lead with confidence and how LRQA is helping organizations take a responsible, risk-based approach to AI. Kevin, welcome to the show. Glad to have you on the Future in Focus podcast. Thank you so much for joining us. How are you doing today?

I'm fantastic. Thank you so much for having me on the show. Really looking forward to a fantastic conversation.

Absolutely. This is really top of mind, really pertinent in our world today and really looking forward to hearing what you have to say about it. Now, before we get started, can you give us a little bit of background on who you are and what your journey is into AI and 42001?

Absolutely, with pleasure. My current role, as you shared, is Chief Product Officer for LRQA, and I also lead one of our strategic growth initiatives called EIQ, which is a supply chain intelligence platform. From a background perspective, I've worked in systems for the last 20 to 25 years. My PhD focused on soft systems theory. I've done a lot of work in the management system space as a trained auditor for 14001, SA8000, 1464 and AA1000, so a range of different management system standards and related frameworks. I have also, of course, been very actively involved in the product development process at LRQA, and I'm a very keen user of AI, and of course of AI governance, both within EIQ and our own systems, implementing that in a fair and responsible way for our clients.

Well, that's great to hear. A lot of experience, a lot of experience. Really appreciate having you on today. And what's wonderful is we started with AI early at Core Business Solutions. We've been looking at it for a long time, and I'm really eager to hear what Kevin has to say about 42001 and some of the governance it brings. Well, fabulous. Let's get started with some of our questions, and we're going to start with this: AI has such an important role in companies now. Why does there need to be governance?

That's a great question, and also very timely, both in terms of the AI part and the governance part. I think most of our listeners will be using and dabbling with AI themselves, whether it's one of the varied generative AI tools out there or AI embedded into many of the applications they experience every day or in their own companies. But what is increasingly clear to pretty much everyone is that AI is not just a tool, but a strategic business imperative, and it is a central part of what will shape an efficient and high-performing business moving forward. Most businesses today will be looking at how they can leverage and use AI to drive additional value for customers, to drive efficiency into their organizations, and to get competitive advantage in what is an increasingly hyper-automated economy. So, part one is really the strategic imperative.

Yeah, I know we've been using it, and the efficiency part is just amazing: using generative AI to write a better paragraph, to do a little research for you, to make things more efficient for me in podcasts. It's nice to have it look at the transcription and give me a description; I don't have to listen to it again and take notes. It can do it for me. So I can imagine it's across the board; so many people are using it.

Right, and you've kind of taken us into why this is so important, which is, you know, as you're starting to use AI yourself, and I'm sure many of our business listeners will feel the same, there are risks that start to become apparent, right? It does hallucinate, and the way some of the algorithms work can lead to false recommendations. What it may put forward as a potential script for a podcast like this may not actually reflect reality. So, part one is the strategic imperative, the business need, the efficiencies, automation, everything. Part two now starts to lean into the governance question: risk, and getting in front of the risk question around AI, which is something we haven't really had the tools to manage before, and ISO 42001 definitely supports with that.

And then there's the whole regulatory piece. We're also seeing a lot more regulation coming up in the AI space, particularly the European Union AI Act, which is the first really big piece of regulation, but there are many other countries and stock exchanges, etc. that are pushing forward similar initiatives, so we expect to see a lot more there. Then stakeholders, clients, customers, other users are starting to ask questions around the use of AI, how well the respective tools have been designed, developed, and are being rolled out. Are people being unfairly discriminated against in any use of AI, etc.? So strategic imperative, risk and getting in front of that question, managing regulations, and also getting in front of stakeholder pressure, four really important and compelling reasons why we should be doing this now.

What exactly is ISO 42001? And what makes it different from the other ISO standards? Obviously, here, we all know what 9001 is, that's your standard quality management system, 14001, environmental, etc. How do you govern AI and what is 42001 all about?

Yeah, also a very good question. So yes, you rightly pointed out some of the existing well-known standards, 9001, 45001, 14001, etc. And actually, 42001 definitely has a number of similarities, in that the core of the standard follows the same sort of 10-clause structure, going through from scope and terms and definitions to context, leadership, etc., performance evaluation and improvement. So the PDCA cycle, plan, do, check, act, is very much embedded into 42001, with AI as a management systems framework.

But it is also, and this is really important, linked to a set of principles and then a set of annexes. So the key driving force behind 42001 is really a set of principles linked to transparency, accountability, fairness, explainability, data privacy, and reliability. In addition to the core PDCA framework, we've got these principles that sit alongside it, and then in particular, we have four annexes at the end of the standard.

Annex A is around controls for responsible AI development, deployment, use and monitoring, which lean into things like policies, roles, resources, life cycle management, impact assessment. Annex B is further detailed guidance on Annex A. Annex C is on primary risk sources related to organizational AI implementation. And then Annex D is on standards applicable to specific domains and sectors. These annexes and the principles are really important because they give us a mechanism of looking at AI as a dynamic, evolving system rather than a static, single point-in-time exercise.
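To make that structure a little more concrete for listeners following along, here is a minimal, hypothetical Python sketch of how a team might track Annex A-style control areas and the standard's guiding principles in an internal register. The control IDs, names, and fields are illustrative placeholders, not text quoted from ISO 42001.

```python
from dataclasses import dataclass, field

# Guiding principles discussed above (paraphrased, not quoted from the standard).
PRINCIPLES = [
    "transparency", "accountability", "fairness",
    "explainability", "data privacy", "reliability",
]

@dataclass
class Control:
    """One Annex A-style control area in an internal compliance register."""
    control_id: str           # internal ID, e.g. "AI-POL-01" (illustrative)
    description: str
    owner: str | None = None  # accountable person; None means unassigned
    implemented: bool = False

@dataclass
class AIMSRegister:
    """Toy AI management system register: controls plus open gaps."""
    controls: list[Control] = field(default_factory=list)

    def gaps(self) -> list[Control]:
        # A control is a gap if it has no owner or is not yet implemented.
        return [c for c in self.controls if c.owner is None or not c.implemented]

register = AIMSRegister(controls=[
    Control("AI-POL-01", "AI policy approved by leadership", owner="CPO", implemented=True),
    Control("AI-LCM-02", "Life-cycle management of AI systems"),
    Control("AI-IMP-03", "AI system impact assessment"),
])

for control in register.gaps():
    print(f"Open gap: {control.control_id}: {control.description}")
```

Even a toy register like this makes the PDCA point visible: the 'check' step is simply asking which controls still have no owner or no implementation.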

That's a really good point. So it's much more holistic. It looks at the life cycle of AI within an organization, from data sourcing to algorithm design, deployment and ongoing monitoring. And then it embeds those principles to add that kind of ethical consideration into AI, bridging the gap between innovation and responsible use of AI.

Exactly. But ultimately, these annexes and the AI-centric controls that are in the annex system give you a really good framework for taking a standard management systems PDCA and then connecting that to a logic that is relevant for AI.

Gotcha, gotcha. And that really makes a lot of sense: taking that core ISO structure that everything else is based on, broadening it out, but also making it very applicable to what AI is doing. Alright, so Kevin, who is ISO 42001 really for? Isn't it just something for big tech firms, data scientists, and people who really need to pay attention because AI is part of everything they do or use? What about small companies and things like that?

Yeah, I love this question, and it's easy to think that AI is just for big tech firms and data scientists and therefore, so is 42001. But the reality is actually 42001 is applicable for most businesses. In fact, many businesses will be using AI, whether you're a local clinic using it for diagnosis or whether you're a retailer deploying chatbots. If you're working with data, your own data, or applying someone else's AI system on top of your own data, 42001 is relevant.

It's relevant therefore for small businesses, large businesses, across a range of different sectors. It's relevant for businesses looking to get that competitive advantage, the strategic lever that comes with AI, that are looking to build trust with consumers and partners, that have online applications and tools. And therefore it can crosscut many sectors like agriculture, education, public services, all of which will increasingly rely on AI decisions in some way, shape, and form. And that will affect the livelihoods of their clients and wider society.

X, you even said at the beginning, you may use AI yourself in script generation, and that's just another example of where actually you might want to consider whether it's relevant for even an organization like yours.

Well, and also for me, I've had issues where it has hallucinated. I was asking a question about a specific clause, we were doing a podcast on a specific clause in the standard, and I'm like, that's not the right clause. It's like, oh, sorry, you're right, pardon me, let me get it right. And then I got the right one. But if I hadn't known that it was the incorrect clause, just think of the rabbit hole that could have led me down. And that's just one really small example. If you're dealing with agriculture, education, something that's making a decision about somebody's life, and it hallucinates, that's a big concern. And how do we now try to mitigate that risk? That's one of the reasons 42001 is out there.

OK, so we talked a little bit about how 42001 is not just for big tech and data scientists. But what about when you're an organization that's not developing AI, and you're just using tools like ChatGPT? For me, in my world, they can make video, they can make pictures and things like that, but I'm not developing that. What about those types of things? How is 42001 going to help us there?

It's still relevant, absolutely. So 42001, yes, would be useful for people that are using third-party, pre-existing AI tools, whether it's ChatGPT or Copilot or any other tools they may be taking off the shelf and then training and applying to their own data, their own systems, their own day-to-day decision making.

When you use a tool like that, you're using it for a specific purpose: to drive efficiency, to improve decision making, to provide a better diagnosis. And that's exactly where you want to be asking questions about the validity of that purpose. So firstly, you're right, it's not just for those large organizations. It's also relevant if you're using existing AI tools in your own business.

The regulatory environment around AI is increasingly starting to shift some of the liability from the developers to the users. So what that means is, as a user of AI, potentially organizations like our own, we need to think carefully about how we're deploying it, what data we're training it on. Even if it's not an AI we built ourselves, not a large language model that we built ourselves, we still need to think carefully about the application.

And that is something that we expect to see really scaling over the next few years as more and more regulation is developed. There will be more and more focus on not just the developers but also the users, the full end-to-end AI supply chain, and ensuring that its application is done in a way that's fair, robust, ethical, and responsible.
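As one illustration of what that user-side accountability could look like in practice, here is a hedged sketch of an audit-logging wrapper around a third-party model call. The `call_third_party_model` function is a hypothetical stand-in, not any real vendor API; the point is simply that purpose, data classification, and human review get recorded with every call.

```python
import json
import time

def call_third_party_model(prompt: str) -> str:
    """Hypothetical stand-in for a vendor API call (not a real SDK)."""
    return f"[model response to: {prompt[:40]}...]"

def governed_call(prompt: str, purpose: str, data_class: str,
                  human_review: bool, log_path: str = "ai_audit_log.jsonl") -> str:
    """Call the model and append an audit record for accountability."""
    response = call_third_party_model(prompt)
    record = {
        "timestamp": time.time(),
        "purpose": purpose,            # why the tool was used
        "data_class": data_class,      # e.g. "public", "internal", "personal"
        "human_review": human_review,  # was a person in the loop?
        "prompt_chars": len(prompt),   # avoid storing raw personal data
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return response

answer = governed_call(
    "Summarise this anonymised customer feedback...",
    purpose="customer feedback summarisation",
    data_class="internal",
    human_review=True,
)
```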

Yeah, I think from a responsibility standpoint, if you're using specific data that could belong to your customers or employees, you have to be so careful. I'm thinking about it from a healthcare standpoint. In America, we have HIPAA, where you're not allowed to share any information; it has to be generalized, even to the point where they won't use people's names sometimes, they just get a number. I mean, could you imagine if you were giving it data that way? Where does that data go? Where is it processed? How is it being used to help develop the AI, as well as to answer other people's questions? That can again go down a big rabbit hole.

100%. Similar in Europe where you have GDPR requirements and there's a lot of other great regulation out there around personal information. Again, you have to be very careful around how you apply AI on top of that information. And then you have to have the right security controls in place to ensure that that information doesn't get out of the system into the wrong environments. So all of this is really enforced and structured through how 42001 works, really giving you the confidence as a business, giving regulators as well as your consumers and clients the confidence, that it's fairly and securely designed, developed and deployed.
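To give a flavour of the kind of security control Kevin describes, here is a toy redaction pass that strips obvious identifiers before a prompt ever leaves your environment. The regular expressions are deliberately simplistic placeholders; a real deployment would rely on vetted PII-detection tooling and a proper data-protection review.

```python
import re

# Deliberately simple patterns for illustration only; real PII detection
# needs dedicated tooling, not a pair of regexes.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact(text: str) -> str:
    """Replace obvious emails and phone numbers before sending text to an AI tool."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

prompt = "Patient Jane, jane.doe@example.com, +1 (555) 010-9999, reports dizziness."
print(redact(prompt))
# -> "Patient Jane, [EMAIL], [PHONE], reports dizziness."
# Note the name slips through: exactly why regexes alone are not enough.
```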

Awesome. So Kevin, we've talked about AI, we've talked about 42001 and how that can help rein in some of the things that we need to deal with when we're dealing with AI. How can 42001 help navigate the evolving AI regulatory landscape? I know you've talked about it a little bit before, Kevin, but I mean, this is something that's not going away and clearly we're going to become more and more aware of the risk and the dangers of AI. How can 42001 help us navigate when rules come down the line?

Yeah, great question, X, and the regulatory environment around AI is very fast-moving. We did touch briefly before on the fact that the EU AI Act is the world's first fully comprehensive AI-specific legal framework, launched in 2024, with phased implementation through 2025 and beyond.

The EU AI Act is also extra-territorial in its nature. So, it influences AI governance and regulation globally. And there are a lot of other regulations evolving as well. Increasingly we're seeing a lot of thinking in the United States, there's a lot happening in Asia Pacific, and probably around 60 countries in total are now developing AI-specific regulation or policy frameworks, all bringing a different but also similar approach to how they're looking at AI.

And what I mean by different but also similar is that there are some underlying principles that are very common across a lot of those regulatory environments. And they are the same sorts of principles that we're seeing within 42001. They want to see a framework, a structure, a logic like the PDCA structure being implemented. They want to see principles like transparency, accountability, fairness, etc. being embedded, which have also been a driving force behind 42001.

And they're looking for the identification of really relevant aspects of AI that might impact the business, and the identification of that in a structured and standardised way, which is exactly what we get with the controls from 42001.

So, 42001 is actually well-designed in that it can keep up with a lot of the evolving regulatory landscape. It functions as a kind of living framework that can move alongside regulatory changes. Because it's sufficiently standardised and broad, if you're a business and you roll out something like 42001 today, you'll easily be able to layer in any new regulatory requirements over time, into the control environment or into how you're looking at and designing this certification and this system in your business.
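One way to picture that layering-in of new regulation is a simple mapping from regulatory obligations onto the controls you already operate, so any unmapped obligation shows up as a gap. The obligation texts and control IDs below are invented placeholders, not quotations from the EU AI Act or ISO 42001.

```python
# Map incoming regulatory obligations onto existing 42001-style controls.
# Obligation texts and control IDs are invented placeholders.
controls = {"AI-IMP-03": "AI system impact assessment",
            "AI-TRA-04": "Transparency notices to users"}

obligations = {
    "inform users they are interacting with AI": "AI-TRA-04",
    "assess high-risk AI before deployment": "AI-IMP-03",
    "keep logs of high-risk AI operation": None,  # no control yet: a gap
}

for obligation, control_id in obligations.items():
    if control_id is None:
        print(f"GAP: no control covers '{obligation}'")
    else:
        print(f"'{obligation}' -> {control_id} ({controls[control_id]})")
```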

It’s also, as I said, something that aligns with the kind of cross-border requirements of many of the AI regulations.

Yeah, I guess you can look at ISO: it isn't new to this. They've been dealing with things like 14001 and 45001 across the globe, with all the regulations that come with health and safety and with environmental. This is just another aspect. ISO knows what it's doing when it comes to how things differ in every country, but the standard is broad enough that it can really guide you rather than get caught up in the specifics of any one regulation.

Absolutely, absolutely right. And you know, like many other PDCA frameworks, 42001 has an emphasis on continuous improvement, impact assessment, etc., and does lean into the regulatory demands of the respective regions and geographies in which it's being implemented, just like most of these ISO frameworks already do. So that's there as well, which means that it will constantly evolve and keep up with any new requirements that come into being.

Fabulous, fabulous. So, Kevin, we've talked about it a little bit before, but what is the role of ISO 42001 in building trust in AI? I'm sure everybody who's used AI and is listening to this podcast has had AI fail them, whether it was misinformation, whether it was hallucinating, whether it gave you something you didn't ask for because you didn't prompt it correctly. How can 42001 help build trust in using AI? Because that's one of the things you want from your standard: not just to keep you safe from potential problems, not just to guide you in the most ethical ways of working, but to help you trust the technology.

Yeah, very important question and you know, I think you're right, AI is a very exciting next chapter of the human experience. And we're using AI today for automatic decision making. We're using it to crunch massive amounts of data, for learning, for designing systems, for answering questions, for really making recommendations. There's so many different things that we're using AI for, for diagnosis, as we discussed before.

Now, ultimately, when we look at 42001, and it's a really great question, there's still a big black box around AI. When we use it for all the things we just discussed, we're still not really sure how it works. You know, we've all read about large language models. We've all read about the different systems and how they work, but we still don't really know and understand them.

That black box is at risk of breeding distrust. 42001 will help to explain some of these models, help to dismantle that black box to a degree, and help reverse exactly that challenge.

In addition to opening up the black box, it will also better embed accountability around the use of AI in an organization. We've also discussed how people might be tempted to use AI because it drives efficiencies, but then use it irresponsibly, maybe not intentionally, but unintentionally. 42001 will help to remove that concern by making people more accountable for how and when they deploy AI, what they use it for, and where they supplement it with human intelligence in decision-making processes and applications.
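A small sketch of what that "supplement it with human intelligence" point could mean operationally: an AI recommendation is only applied automatically below an impact threshold, and anything above it is routed to a named human reviewer. The thresholds and categories here are invented for illustration, not taken from the standard.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    summary: str
    impact: str  # "low", "medium" or "high" (invented categories for illustration)

AUTO_APPLY = {"low"}           # only low-impact recommendations skip review
REVIEWERS = {"medium": "team lead", "high": "governance board"}

def route(rec: Recommendation) -> str:
    """Decide whether an AI recommendation is auto-applied or human-reviewed."""
    if rec.impact in AUTO_APPLY:
        return f"auto-applied: {rec.summary}"
    reviewer = REVIEWERS.get(rec.impact, "governance board")
    return f"routed to {reviewer} for sign-off: {rec.summary}"

print(route(Recommendation("re-order stationery", impact="low")))
print(route(Recommendation("decline a loan application", impact="high")))
```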

So those are two really important parts of the standard that will help us get more trust in how we use AI.

Right, right.

I think thirdly, 42001 also requires us to really engage multiple stakeholders as part of the process. So this will give us more confidence around how we look at and deploy that management system within the organization, knowing that it was designed in a way that represents the needs and interests of the business itself, as well as its clients, customers, regulators and employees.

Right. So let's say we have a customer now that's ready to consider getting certified to ISO 42001. What do you find are some of the common misconceptions about that implementation?

Yeah, good question, and I'm sure we're going to get this a lot. So firstly, the biggest misconception is that it is only for developers, you know, those large big tech firms.

Right, right.

But as we've discussed already, it's just as relevant for the consumers and users of AI: small businesses, medium-sized businesses, other large businesses that could be using AI tools and are basically customers of AI, applying it, training it, deploying it across their own systems and data to drive efficiency within their organization and enhance the experience for their clients. Those businesses also need to be looking at 42001.

So that's misconception number one. It's not just for the developers and builders but also for the users.

Anybody can use it.

Correct, the whole supply chain around AI, really. The second big one is that 42001 is expensive to implement. In reality, this is not a big, expensive exercise, any more than a lot of the other management system standards like 9001 and 14001 are.

42001 implementation is also not prohibitively expensive, from a certification perspective, from a training perspective, or an internal application perspective. In fact, actually, it could end up saving you money. It will give you early wins in generating a standardized way of managing AI within the business.

And the return on investment by doing this versus actually having some medium-term failures, regulatory fines or blowback from customers means it’s something that should be considered seriously, particularly at this point in time where AI is scaling so fast, but this black box we discussed is still very much an open question that needs to be addressed.

Yeah. I think back to 27001, where it's one of those things where you don't know what you don't know, or where your dangers are. At least with 27001 and information security, you're trying to keep somebody out, or you're trying to keep something from leaking out. Here, you could potentially be handing over information that shouldn't be handed over. So it's really, really important to make sure you're not putting yourself at so big a risk that you hit those fines and failures, anything like that.

So Kevin, do you find that 42001 is easy to integrate into an IMS like others have been? We see a lot of 9001 and 14001 together, 9001 and 45001, 9001 and 27001. Is 42001 pretty easy to fold into an IMS as well?

In principle, yes. The underlying structure of the framework is very similar to the other standards you've just mentioned. So in principle, yes. What I would say is that 42001 is a little newer in the market, so on the specifics that apply to AI, there are not necessarily as many auditors, certifiers, or qualified experts available yet to support the embedding and implementation.

But in principle, yes. If you've got the right people to support you through that process, then 100%.

You can do all the due diligence, but you've got to find someone who can actually certify you to it right now. That's really a key point there.

So Kevin, what do you say when businesses come to you and ask, OK, we're considering 42001, where do we start?

Very important question. Firstly, get hold of the standard, read it, learn about it, and think about it. As part of that information gathering, by the way, you may even go to a generative AI tool, ChatGPT, Perplexity, Claude, and ask it the questions: What's in 42001? Is it relevant for my business? So do some of that initial discovery work, for sure. That's absolutely critical: learn about it and its application to your business. And with that, you start to think about what kind of business advantage you could get by deploying it, because there should be a business case ready for the deployment.

Secondly, I would say training. Training is a great place to start with these sorts of things, new standards, new tools: going through a structured way of looking at this with people that know and understand it and have the technical expertise to support it. So an introduction to 42001 training, or auditor or lead auditor training, depending on the size of your business; you may even be in an internal audit function doing this within your own organization. Training is a really important step here. It brings good structure, logic and a sort of systems thinking into the potential rollout within your organization.

Then I would say thirdly, there's the option of doing a more comprehensive or a light-touch AI risk assessment, to understand where and what sorts of exposures you may have in your organization today, whether you're building your own models or using third-party AI tools. Getting some of that insight, and thinking about the risks not just to your own business but also to your customers and other key stakeholders, will help you plan your 42001 deployment and roadmap a little more effectively and sequentially.
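For listeners wondering what a light-touch version of that assessment might look like, here is a hypothetical scored risk register: each AI use gets a likelihood and impact rating, and the product of the two sorts the exposures. The scales and example entries are made up for illustration.

```python
# A light-touch AI risk register: score = likelihood x impact on a 1-5 scale.
# Entries and scores are invented for illustration.
risks = [
    {"use": "chatbot gives wrong product advice", "likelihood": 4, "impact": 2},
    {"use": "hallucinated clause in compliance doc", "likelihood": 3, "impact": 4},
    {"use": "personal data leaks via prompts", "likelihood": 2, "impact": 5},
]

for r in risks:
    r["score"] = r["likelihood"] * r["impact"]

# Highest exposures first, so the 42001 roadmap tackles them in order.
for r in sorted(risks, key=lambda r: r["score"], reverse=True):
    print(f'{r["score"]:>2}  {r["use"]}')
```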

Then, obviously, if you're going to move forward to thinking about certification, this is the point where you need to start getting the right level of executive sponsorship in the business, by putting together that business case for 42001 as a strategic enabler of innovation, risk management, competitive advantage, etc., and not just a compliance checkbox. There should be a return on investment linked to the rollout of the standard.

And then, depending on the size of your business, you could either roll it out for a high-impact but manageable use case within your organization, or it may be that your business is one use case and you're deploying it for the whole organization. It really depends on the scale and size of the business. But those would be the five key steps I would suggest for starting.

Even if you just do steps one and two, which is really learning about it, understanding it, and then going through the training and standardizing your thinking about it, so that you can move into the risk assessment and governance, that would be a really great way to start.

So real big final question here before I get your final thoughts. Why does this matter now? We talked about it a little bit. It's coming at us hard and fast, best to get on top of it, but why does 42001 matter now?

From my viewpoint, we're at a tipping point. A tipping point for humanity around the transition to AI and the use of it, but also a tipping point for AI itself and how it's going to be deployed by businesses, by people in different sectors and in different countries and regions.

There’s so many new tools being developed all the time, every day, there’s so much to keep up with. Increasingly, we’re at a point where effective governance of AI is what’s going to separate the winners from the casualties or the losers. There’s so many new models, so many new updates, so much competition, actually, what becomes more and more relevant is how well you as a business are using and deploying it, governing it, and communicating that, internally and to your stakeholders. Ignoring that governance puts you at risk of a really potentially catastrophic failure that could have a huge reputational or monetary impact on your business, destroying brand equity and market position.

So that's point one: governance is more relevant now than ever before. Point two: if we don't get this right, we incur ethical debt. This is a real thing that we'll start to talk about more moving forward, and it's nowhere more relevant than in the deployment of AI. The models, the tools, and how businesses deploy AI could lead to ethical debt, not just the tech debt that a lot of people are already very familiar with.

That would be very hard for companies to dig themselves out of if they inadvertently get into a hole that's too deep. And this is very possible with AI, because things, information, communication, are scaling so fast that we need to make sure we never incur ethical debt and that we stay in front of this dilemma from the beginning. Now more than ever, 42001 is relevant.

It's not just about avoiding risk, avoiding ethical debt, and getting in front of the governance landscape. It's also about being able to effectively and safely seize the massive opportunities and be at the front end of the AI revolution, in a responsible and efficient way that supports deep, long-term trust building with your clients and regulators.

Well said. This has been really great. I can't tell you how much I've enjoyed this, talking about AI and 42001. Any last thoughts, Kevin?

Maybe a very short but hopefully provocative thought. We've spoken a lot about how AI is ubiquitous, everywhere and inevitable. What we haven't really acknowledged is that AI is not just everywhere and being used; it's also shaping our reality itself.

We use AI, we ask it questions, it answers them, and we believe the answers. When we make business decisions based on AI insights, it is shaping the reality that we perceive and experience every day. Now, in a context where AI is actually shaping and co-creating this reality with us, don't we want to know that it is being effectively governed while it's doing that?

So governance isn’t a bureaucratic hurdle, it’s ultimately a central part of co-creating a future together with AI. We’re doing that already, by the way, and that allows us to move from what is a very risky experiment, a genie out of the bottle, into a space where we can use AI to generate sustained competitive advantage for our business and for our clients.

This has been a really eye-opening look at AI and 42001. And really, really a great episode for everybody to listen to. Thanks for being here.

Thanks for having me, X. Really enjoyed it very much and appreciate your time.

And thanks to all of our listeners for tuning in. At LRQA, we help organizations adopt AI and technology with confidence. Our services include ISO 42001 training, gap analysis and certification, helping you demonstrate best practices in managing AI risks and embedding ethical, responsible AI across your operations. We also offer solutions that extend into cybersecurity and information security through ISO 27001 and ISO 27701, ensuring a comprehensive approach to your digital transformation journey. Thanks for listening today.