A Pathway to an Actionable AI Governance Roadmap

Meghan Maneval
Author: ISACA Now
Date Published: 8 May 2024

Editor’s note: ISACA is introducing new training courses on artificial intelligence, including a course on AI governance. Meghan Maneval, an AI expert who contributed to the AI Governance: Principles, Strategies and Business Alignment course, recently visited with the ISACA Now blog to share her perspective on AI governance and some of the main components of the course. See the interview with Maneval below, and find out more about ISACA’s new AI courses on governance, AI essentials and auditing generative AI here.

ISACA Now: What interests you most about AI governance?

My interest in artificial intelligence began many years ago when I started exploring the use of process automation in cybersecurity. Like many others, I watched what began with triggers and workflows grow into myriad generative AI and machine learning opportunities across most aspects of life. As AI usage spread throughout organizations, the panic associated with new technology set in. But as a self-proclaimed “risk optimist,” I instead started looking for the overlap between existing frameworks and the unique threats posed by AI systems.

Right now, there isn’t a lot of guidance for AI governance. And that’s what interests me the most – the opportunity to define it! It’s a whole new world and we’re learning every day about new possibilities and new threats. As an industry we get the opportunity to build the guiding principles for the future of ethical, trustworthy, and secure AI systems. And that sounds pretty awesome to me!

ISACA Now: How do you see AI governance intersecting most with professionals in fields such as audit, risk, privacy and security?

The most important thing to keep in mind about AI governance is that it is not a standalone process. To be successful, it must be embedded into the organization’s greater security and compliance ecosystem. Specifically, including AI risk management as part of the software development, project management and change management processes ensures risk is assessed early and often, and mitigated to an acceptable level. 
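To make that embedding concrete, here is a minimal sketch (not from the course; the field names, checks and gate logic are my own assumptions) of how an AI risk check might sit inside a change-approval workflow so risk is assessed before a change ships:

```python
from dataclasses import dataclass

# Hypothetical illustration: an AI risk gate embedded in change management,
# so AI risk is assessed early rather than after deployment.

@dataclass
class ChangeRequest:
    title: str
    uses_ai: bool               # does the change introduce or modify an AI component?
    processes_personal_data: bool
    model_documented: bool      # purpose, data sources and limitations recorded
    vendor_assessed: bool       # third-party AI provider reviewed, if any

def ai_risk_gate(change: ChangeRequest) -> list[str]:
    """Return findings that must be resolved before the change is approved."""
    findings = []
    if change.uses_ai and not change.model_documented:
        findings.append("Document the model's purpose, training data and limitations.")
    if change.uses_ai and change.processes_personal_data:
        findings.append("Complete a privacy impact assessment for the AI component.")
    if change.uses_ai and not change.vendor_assessed:
        findings.append("Run the third-party AI assessment for the provider.")
    return findings

# Example: the change stays blocked until the findings list comes back empty.
request = ChangeRequest("Add AI-assisted ticket triage", True, True, False, False)
for finding in ai_risk_gate(request):
    print("BLOCKED:", finding)
```

The point of the sketch is simply that the AI checks ride along with the approval process the organization already runs, rather than living in a separate, easily skipped review.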

From a privacy and data governance perspective, ensuring proper asset, data and model tracking provides a holistic view of AI usage. Further, ensuring your third-party risk management process can track which vendors use AI and how they are securing it provides greater risk oversight. In addition, adding AI controls to the internal audit calendar and including updates as part of board meetings or management review sessions ensures everyone remains informed and that the AI governance strategy can be adjusted in a timely manner as needed. In short, AI governance intersects with EVERY aspect of audit, risk, privacy and security, and it matters to every branch of GRC.
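As an illustration of that kind of tracking, here is a hypothetical sketch of an AI asset and vendor register (the fields and helper below are assumptions, not a prescribed schema) that supports asking "which vendors use AI, and have we assessed them?":

```python
from __future__ import annotations
from dataclasses import dataclass

# Hypothetical AI asset/vendor register giving a holistic view of where AI
# is used, what data it touches and which third parties supply it.

@dataclass
class AIAsset:
    name: str
    owner: str                      # accountable business owner
    purpose: str
    data_categories: list[str]      # e.g. ["customer PII", "support tickets"]
    vendor: str | None = None       # third-party provider, if any
    vendor_assessed: bool = False   # has the vendor's AI security been reviewed?
    last_audit: str | None = None   # date of the most recent internal audit

def unassessed_vendor_assets(register: list[AIAsset]) -> list[AIAsset]:
    """Assets supplied by a vendor whose AI use has not yet been assessed."""
    return [a for a in register if a.vendor and not a.vendor_assessed]

register = [
    AIAsset("Chat summarizer", "Support", "Summarize tickets",
            ["support tickets"], vendor="ExampleVendor"),
    AIAsset("Fraud model", "Risk", "Score transactions",
            ["payment data"], last_audit="2024-03-01"),
]
for asset in unassessed_vendor_assets(register):
    print("Needs third-party AI assessment:", asset.name)
```

Even a simple register like this gives internal audit, privacy and third-party risk teams a shared view to work from.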

ISACA Now: What might the consequences look like for organizations that do not prioritize governance around AI?

Companies that don’t embrace AI governance may be able to get by for a little while, but it’s only a matter of time before we start seeing AI-related breaches, which in turn will lead to mandatory regulations. My prediction is that it will likely be a large organization breached through a third-party vendor, similar to Target's infamous HVAC vendor breach. There isn’t enough transparency around how AI is being used, and many organizations are unknowingly taking on AI risk through their vendors. If an organization isn’t prioritizing third-party AI assessments, it could fall victim to a third-party breach.

Organizations may also start losing business for not prioritizing AI governance. The more we learn about AI, the more important it becomes to understand what it does, what it accesses and how it is secured. Organizations that don’t implement these processes will be unable to demonstrate trustworthy AI usage to their customers.

ISACA Now: What is an aspect of ISACA’s new course on this topic that you think learners will find especially valuable?

The course offers a deep dive into the principles of AI governance, including designing, developing, implementing and monitoring secure and trustworthy AI systems. The course also explores the benefits and challenges of implementing AI governance, providing best practices for communicating AI risk to stakeholders. But what learners will find particularly valuable is the four-step actionable AI governance roadmap. Upon completion, they’ll have the tools and expertise to begin deploying a strong AI governance program at their organization.

ISACA Now: How will AI governance knowledge help individuals stand out in their enterprises or with a future employer?

AI governance is essentially a new field of study. Even though there is some overlap with existing GRC principles, understanding the nuances, challenges and benefits specific to AI governance will set individuals apart as AI subject matter experts. Given the newness of these concepts, learners will be at the forefront of AI governance and in high demand at companies serious about AI usage. They will be among the few masters of AI governance, a skill they can take to any future role or organization.
