This content originally appeared in NACD Directorship Q4 2024 as "Navigating AI Governance: A Pragmatic Guide for Corporate Boards"
“While generative AI has shown us how quickly technology can evolve and be embraced, board members have been providing oversight over emerging risks for decades. The same foundational principles that have enabled responsible governance over other risks will help boards deliver effective oversight related to AI.”
The exponential growth and availability of artificial intelligence (AI) across every sector are compelling boards to recalibrate their roles in providing oversight of strategy and risk management. While disruptive, this new dynamic presents a mosaic of opportunities for growth and promises significant impact across all stakeholders — customers, communities, employees, and even shareholders. The key to unlocking strategic opportunities to drive value using AI, and to effectively managing the risks it presents, is recognizing that although AI is technology, how we address and use it is profoundly human.
Approaching AI from the perspective of your company’s mission and values can align strategic decisions with the interests of the people your organization serves. AI presents strategic opportunities, risks of adoption, ethical dilemmas, intellectual property concerns, and privacy challenges. Although there are legitimate concerns about AI replacing jobs that can be automated, AI also can elevate human roles, driving value and maintaining alignment with your organizational ethos. AI adoption strategies, ethical considerations, risk management approaches, and governance frameworks should all be informed by these human-centered principles.
The Wild West of AI regulation
Although some aspects of AI are still unregulated, organizations need to create structure around their AI implementation in anticipation of regulation becoming more comprehensive in the future. The rapid pace of development makes it necessary for companies to establish their own guidelines and safeguards to maintain trust, rather than waiting for external regulation.
It’s important, of course, for boards to remain on top of emerging legislation and regulation as it is developed. Although Congress is still in the early stages of exploring AI-specific legislation, President Joe Biden signed an executive order in February 2023 directing federal agencies to root out bias in the design and use of new technologies, including AI, and to protect the public from algorithmic discrimination. In April 2023, the Civil Rights Division of the U.S. Department of Justice, the Consumer Financial Protection Bureau, the Equal Employment Opportunity Commission, and the Federal Trade Commission issued a joint statement describing their intention to use their existing legal authority to protect the public from harm related to AI.
The European Union (EU) may be the jurisdiction that’s furthest ahead in AI regulation: in June 2023, the European Parliament adopted its negotiating position on the EU’s AI Act. Talks have begun with EU member states on the final form of the law, with the aim of reaching an agreement by the end of the year. As proposed, the rules would establish obligations for AI providers that are tiered according to the level of risk their systems present. AI systems presenting an unacceptable risk would be banned, while lower-risk systems would be subject to registration or transparency requirements commensurate with their risk level.
China is developing a regulatory framework that includes rules for AI algorithms, AI-generated images, and chatbots, and has proposed rules for generative AI.
California, Colorado, and New York City are among the state and local government leaders in developing AI legislation and regulation. New York City has adopted final rules requiring employers that use AI in hiring to inform candidates that they are doing so and to undergo independent audits to show that their systems are not sexist or racist. Since 2021, Colorado has held insurance companies accountable for testing their big data systems, including AI-based systems, to verify that they are not unfairly discriminating against consumers in a protected class. And California legislators have proposed a host of new regulations designed to protect the public when AI is used to make important decisions in the marketplace.
Because these and other regulatory requirements are evolving, boards should seek regular updates on their progress, clarify how the rules apply to their organizations, and determine what management is doing to facilitate compliance. But regulatory developments should be just one factor in governance over AI. In an environment where the technology is advancing faster than regulation, boards must take a strong stance in overseeing their companies’ use of AI and in establishing a responsible culture around the technology that matches their organization’s ethical framework. While the pace of regulation may be deliberate, boards have an opportunity to be nimbler and act more quickly in the human interest as AI develops.
Although the technology may be new and some of the warnings about AI may be alarming, providing governance in this situation isn’t new territory. Boards have navigated complex waters before, albeit under different circumstances. While ushering their organizations through each of the following crises, board members learned something profound:
- Y2K: As the approaching turn of the millennium brought fears of widespread system failures related to difficulty changing dates in computers, board members learned the importance of finding the talent and capacity to handle specific challenges that could pose existential threats to their organizations.
- Enron Corp. and WorldCom scandals: In response to the Sarbanes-Oxley Act of 2002, enacted in the aftermath of major corporate scandals and designed to strengthen corporate governance, board members learned to step up their oversight, assert their independence, and hold executives and management accountable for their actions and their organizations’ financial reporting.
- Global financial crisis: Confronted with the economic calamity that became known as the Great Recession, board members learned the importance of developing responsiveness and agility, with a financial resilience that may require exploration of alternative sources of financing.
- COVID-19 pandemic: As global business stopped and started with the ebbs and flows of a deadly pandemic, boards learned to act quickly, become more flexible, provide timely insight, and prioritize the health, safety and well-being of their people.
The perseverance of organizations through these challenges shows that boards have been tackling complicated disruptions for decades. The task now is to handle the disruption and opportunities associated with AI with the same wisdom, grit, and resilience.
Adoption is rapid
Thirty percent of respondents in Grant Thornton's chief financial officer (CFO) survey for the second quarter of 2023 report that their organizations are already using generative AI, and an additional 55% say they are exploring potential use cases for it.
Recognize opportunities, manage risks
The velocity of AI’s evolution magnifies the risk of organizations falling behind competitors, pressing them to respond swiftly. This haste can lure companies into a “ready, fire, aim” approach, springing into action before thorough planning or analysis. Conversely, a “wait and see” strategy poses equal but different risks.
In navigating these waters, strategic thinking about AI becomes paramount. AI can infiltrate every process within an organization, from customer service and marketing to human resources and compliance. This requires a conscientious commitment to exploring the complexities and concerns related to AI and identifying clear-cut goals for its use. To start, boards may wish to ask management:
- Does the proposed AI use have a positive intent that is in line with our mission and values?
- What are we doing to make our people aware of the legal risks and copyright issues related to AI-generated content?
- How do we stay up-to-date on the evolving regulatory landscape?
By making AI strategy and risk a board-level priority, boards send a powerful message to management, employees, and other stakeholders about the company’s intention to use AI in a thoughtful and reasonable manner.
Board oversight should align technology with mission, values
The use cases of AI are numerous and can be set into motion in virtually every function of the organization. Here are some examples:
General
- Editing and proofreading, including improving copy and tone
- Searching and summarizing documents (e.g., chat with PDF)
- Summarizing meeting notes and transcribing audio to text
- Data cleaning and formatting
- Data quality control
- Data analysis and visualization
- Automated reporting
- Automated data entry
- Generating templates (e.g., for emails, campaigns, outreach, contracts)
- Personalizing standardized communications and templates
- Market and competitor trend analysis
Marketing
- Idea generation (e.g., blog topics)
- Translating content to different languages (with human checks)
- Improving and refining search engine optimization
- Recommending ad formats and creative design for web pages, articles, campaigns and images
- Scheduling social media posts based on target audience behavior
- Identifying and analyzing top-performing ads and campaigns in the industry
- Analyzing customer and prospect data to inform segmentation
- Identifying customer pain points (via analysis of online reviews, social media mentions, etc.)
Sales
- Role-playing to practice pitches and proposals
- Developing standardized training tools for sales representatives
- Researching target industries and personas
- Building and updating comprehensive client profiles
- Drafting or refining emails and subject lines
- Script and outreach message generation, personalization and editing
Finance
- Investment analysis and prediction
- Forecasting and planning
- Identifying which variables are most important to financial models
- Conducting comparative analyses of company performance
Information Technology
- Building a knowledge base of frequently asked questions and troubleshooting guides
- Troubleshooting and resolving technical issues
- Building customer service and support bots (internal and external)
Risk
- Analyzing and summarizing legal and regulatory documents
- Assisting in drafting or reviewing legal- and compliance-related forms, filings, contracts, etc.
Boards and management can take six steps to proactively oversee the opportunities and risks of AI.
1. Understand best practices and engage with management. To provide effective oversight for management, boards need to first understand best practices for organizational leaders as they work to make the most of AI opportunities while managing the risk.
It’s important, too, for board leaders to understand that time is of the essence. During the COVID-19 pandemic, boards became adept at interacting with management in a timely fashion as they handled urgent challenges that were rapidly developing. While AI isn’t a crisis-level scenario like the pandemic was, it may demand the board’s attention more frequently than quarterly board meetings allow.
One emerging best practice for management is to align all uses of the technology throughout the organization under one central point of leadership. At small organizations that might be one individual or a small group of people, but many companies are discovering the benefits of developing either a management-level committee or a center of excellence to manage AI use throughout the enterprise.
Without a central AI management function, people will develop inconsistent practices that can increase the risks of data theft, cybersecurity breaches, and copyright infringement—and that are at odds with the organization’s principles. Consolidation of AI under one umbrella reduces those risks. For example, if three different business functions are using different vendors and platforms for generative AI purposes that perform similar work, the organization’s cybersecurity and risk exposure may be tripled unnecessarily.
Companies that use a management-level AI governance committee set the tone at the top for the use of AI, but they decentralize the execution of the AI plans. The committee develops a framework that describes the responsibilities of everyone involved in an AI project and creates procedures for decision-making, risk management, compliance and ethical alignment. The implementation of the AI strategy is then undertaken by representatives recruited from various company functions. The committee may recommend that functions that will be doing AI projects set up and use a project management office to support and oversee those projects. Alignment of operations with governance is maintained through regular audits and reviews designed to verify that AI use cases adhere to the policies and procedures that have been developed.
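To make the committee’s framework concrete, here is a minimal sketch in Python of how a use-case register and escalation rule might look. The fields, risk tiers, and decision rule are illustrative assumptions, not a standard taxonomy or a description of any particular company’s framework.

```python
# Illustrative sketch of an AI use-case register a governance committee
# might maintain. All field names and risk tiers are hypothetical.
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    LOW = "low"        # e.g., internal drafting assistance
    MEDIUM = "medium"  # e.g., customer-facing content with human review
    HIGH = "high"      # e.g., outputs affecting decisions about individuals

@dataclass
class AIUseCase:
    name: str
    business_unit: str
    owner: str           # the accountable individual
    vendor: str          # platform or model provider
    risk_tier: RiskTier
    approved: bool = False
    review_notes: list[str] = field(default_factory=list)

def requires_committee_approval(use_case: AIUseCase) -> bool:
    """Illustrative escalation rule: medium- and high-risk uses go to the committee."""
    return use_case.risk_tier in (RiskTier.MEDIUM, RiskTier.HIGH)

# Example: a marketing use case escalates to the committee before launch.
use_case = AIUseCase("ad copy generation", "Marketing", "J. Doe",
                     "ExampleVendor", RiskTier.MEDIUM)
print(requires_committee_approval(use_case))  # True
```

In practice such a register would live in a workflow tool, but even a simple structure like this forces each use case to name an owner, a vendor, and a risk tier before work begins.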
“Board members want to know that everyone from finance to operations to marketing is able to use AI to maximum benefit while adhering to the organization’s core values and principles of ethics and risk management. A center of excellence can provide the structure, strategic input and execution support to drive consistent, effective use of AI in lock step with the organization’s standards.”
Alternatively, management can create a single center of excellence for AI. There are many ways to build and operate a center of excellence, but in this approach the implementation of AI is centralized. In a typical center of excellence approach, an AI steering group of top executives and leaders from relevant departments sets the overall direction for AI initiatives and provides high-level oversight. The center of excellence may have one leader who is supported by program managers, ethics officers, compliance officers, and an AI technical team. Individuals in each business unit work with the center of excellence to develop AI initiatives in their respective units, and a risk and audit team conducts regular risk assessments to verify that AI initiatives meet standards and compliance requirements.
In either case, the board is responsible for providing oversight of the management-level governance undertaking. Board members review the policies and framework to make sure they are consistent with the company’s direction, and they verify that risks are being managed appropriately.
2. Prepare your people for the change. Regardless of which approach an organization uses, management should develop employees’ ability to use AI successfully.
Despite concerns that AI may eventually replace many jobs, AI implementation most often succeeds when people use the technology to perform tasks more efficiently and effectively and to act on data-backed insights that AI provides. As organizations deploy AI, upskilling provides the workforce the ability to effectively use the technology. Training, workshops and a culture of continuous learning help companies and their people get the most out of AI.
Use of AI is not a plug-and-play process. You wouldn’t send a new driver out into heavy traffic the first time they sit behind the wheel, and you can’t expect people to immediately have an innate understanding of how to make the best use of AI tools. Without training and guidance on how to prompt generative AI, for instance, employees may accept inferior or irrelevant AI outputs. If they reject those outputs, they might also turn against use of the technology itself.
Training a core team of “AI champions” who can cascade their knowledge to other trainers and ultimately to people throughout the organization may help encourage use, improve outputs, and in the future enable use of more complex AI tools.
Educating people on the appropriate risk management and cybersecurity considerations related to AI also helps build a culture that’s consistent with organizational values. Employers have been conducting mandatory training on individual and information technology cybersecurity best practices for many years. AI risk management can be addressed in a similar fashion, even as organizations also train employees on the use cases that can make them more effective in their roles.
Leading companies also will encourage their people to imagine innovative use cases for AI that will lead to improvement in their roles or at the organization. These use cases can be brought to management for approval — and funding, if necessary. At Grant Thornton, for example, techniques for using advanced analytics are proposed by employees and developed into customized solutions that reveal inefficiencies, test entire samples of data for anomalies, and may even detect fraud. Some organizations may hold contests that reward people for the most innovative ideas.
3. Balance innovation and risk management. To help guide strategic decisions around AI, organizations also can rely on their enterprise risk management (ERM) programs and functions. Even though AI is a novel technology, it merely represents a new form of risk and disruption that strong ERM programs should be equipped to manage.
The organization’s ERM program should help align its risk appetite with AI adoption strategies. While remaining stagnant may not be an option due to the pace of technological advancement, reckless adoption without thorough attention to ethics and strategy poses significant risks. Boards can pose the following questions to management:
- Are we using AI in ways that support our quality goals and our values?
- Who is charged with AI strategy and implementation? What is that person’s or team’s level of subject matter expertise?
- Is there a framework in place that aligns risk management with AI strategy, such as the National Institute of Standards and Technology (NIST) AI Risk Management Framework or the frameworks of the Committee of Sponsoring Organizations of the Treadway Commission (COSO)?
- If we aren’t using AI yet, what is holding us back? Are there reasons for staying on the sidelines?
These risks extend to internal team dynamics and external customer perceptions. Miscommunication about AI’s role may lead to anxiety among employees fearing job displacement. This concern can fuel negative attitudes toward AI, limiting its potential effectiveness. Communicating clearly with the workforce about AI implementation is critical for both retaining talent and maximizing the technology’s benefits.
Sixty-two percent of organizations have defined and monitor the risks associated with generative AI, according to Grant Thornton's CFO survey for the second quarter of 2023. Fifty-two percent have clearly defined acceptable use policies, 49% have formal training on the use of AI, and 44% have boards that have taken an active role in understanding AI.
The strategy developed through the ERM program, alongside board oversight, can facilitate this communication internally and externally. This strategy can also guide ethical and effective data use. With rapidly changing regulations around data privacy and cybersecurity, it is critical to understand exactly how and where data is used.
Seizing AI’s potential while upholding an organization’s core principles and mission is the ultimate goal. Starting with small-scale, low-risk AI projects in line with the organization’s maturity can help leadership understand the opportunities and challenges, which can be addressed in subsequent larger-scale projects.
4. Protect your reputation against the double-edged sword of AI. AI that is implemented strategically, with a plan that appropriately addresses risks, can fortify an organization’s reputation: it enhances employees’ productivity, improves products, services, and outputs, and demonstrates the organization’s commitment to societal values. Some experts believe that generative AI could substantially raise global gross domestic product by driving increased productivity.
AI can also introduce reputational risks around job displacement and other areas. Companies that rely heavily on AI to replace people, rather than to augment human performance, risk damaging their reputation and cultivating a workforce riddled with anxiety.
However, AI can also provide new opportunities for employees to add value, allowing them to work with cutting-edge technology and advance in their careers. The key is to encourage them to see AI as a tool for enhancing their work and even discovering new use cases that could bring strategic advantages and cost savings. Then, ensure that the organization has put in place the opportunities and guardrails for employees to embrace AI and provide additional value.
Using AI ethically and responsibly is paramount. We have all read about the AI job application-sorting programs that once screened applicants to match past hires, inadvertently ruling out talented people from diverse backgrounds. Bias, lack of transparency into algorithms, and misuse of personal data are potential pitfalls that can harm the very people boards are trying to protect — and in turn can damage a company’s reputation and financial health.
Boards may wish to ask and discuss the following questions:
- What steps are we taking to offer training to employees whose jobs may change or be eliminated by AI?
- Do we have an acceptable use policy to ensure consistent and responsible use of AI across the company?
- How are we ensuring that humans are reviewing content generated by AI?
- How have we incorporated ethics in the development of AI?
Those in governance roles must verify that the management team has put policies in place that encourage innovation while ensuring that employees who use AI understand the inputs the software relies on, helping to prevent unintended consequences.
5. Manage cybersecurity and privacy risks. Because AI is implemented at the intersection of technology, operations, and strategy, the cybersecurity and data privacy implications related to its adoption are immense. Any AI strategy should include careful risk mitigation for cybersecurity and data privacy.
When organizations implement AI technologies, they expose themselves to five primary risks:
- Data breaches and misuse. Because AI platforms process and store huge quantities of personally identifiable information and other sensitive data, they are particularly vulnerable to internal misuse and external attacks. Third-party AI platforms that are not properly integrated and monitored can expose sensitive data to nefarious actors. Meanwhile, the sudden popularity of generative AI may lead employees who are eager to experiment with AI platforms to inadvertently expose data that should be protected.
- Adversarial attacks. These attacks manipulate input data to cause errors or misclassification, leading AI systems to make faulty judgments. Those errant decisions could include divulging sensitive information or performing unauthorized actions.
- Malware and ransomware. These threats have existed for years, and AI systems are not immune to them. One risk for systems that rely on AI is the encryption of the very data that AI relies upon, which could prevent legitimate access and cause disruption of services.
- Vulnerabilities in AI infrastructure. These threats are not unique to AI, as any software can be compromised by attackers, with risks including denial of service, unauthorized access to sensitive data, or entry into an organization’s internal network.
- Model poisoning. While AI is in development, attackers may poison the training data with malicious data, adversely influencing the AI’s output and causing the software’s behavior to deviate from its intended purposes.
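To make the model-poisoning risk concrete, the sketch below, which assumes a Python environment with NumPy and scikit-learn, flips a fraction of training labels and compares a model trained on clean data against one trained on the poisoned set. The dataset and attack are synthetic and purely illustrative, not a real-world attack.

```python
# Minimal illustration of training-data poisoning using synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Build a synthetic binary-classification dataset.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Clean baseline: train on trusted data.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Poisoning: an attacker flips the labels of 30% of the training rows.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[idx] = 1 - poisoned[idx]
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned)

# The poisoned model's behavior deviates from its intended purpose:
# its test accuracy drops relative to the clean baseline.
print("clean accuracy:   ", accuracy_score(y_test, clean_model.predict(X_test)))
print("poisoned accuracy:", accuracy_score(y_test, poisoned_model.predict(X_test)))
```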
Boards can play an important role in helping organizations steer clear of these threats. Board members may wish to verify that management is engaging in security and privacy practices that can include the following:
- Reviewing policies and procedures to define security requirements and oversight specific to AI
- Performing threat-modeling exercises that can identify measures for mitigating risks
- Exercising effective governance over data by verifying that data is properly classified, protected, and managed
- Controlling access to AI infrastructure, including data and models, with authentication and authorization mechanisms
- Encrypting data to protect the confidentiality and integrity of AI training data, source code, and models (a minimal sketch follows this list)
- Enhancing security for end points such as laptops, workstations, and mobile devices
- Overseeing vulnerability management, including periodic penetration testing on the AI software
- Maintaining awareness of security and compliance issues, namely:
  - Privacy and data protection regulations that apply to AI use
  - Ethical implications of AI technologies
  - Legal and regulatory compliance requirements affected by use of AI, including those related to intellectual property, liability, and accountability
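As a minimal illustration of the encryption practice in the list above, the following sketch assumes the third-party Python cryptography package (pip install cryptography) and shows authenticated encryption of a single training record at rest. Key management, which is the hard part in practice, is deliberately out of scope.

```python
# Encrypting a training record at rest with authenticated encryption.
# Fernet protects both confidentiality and integrity of the data.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice, fetch from a managed key vault
fernet = Fernet(key)

training_record = b"customer_id=12345,churn_risk=0.82"  # hypothetical data

token = fernet.encrypt(training_record)  # ciphertext safe to store on disk
restored = fernet.decrypt(token)         # only key holders can read it

assert restored == training_record
```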
6. Manage intellectual property and third-party risks. During a Grant Thornton forum featuring news industry executives in April 2023, NBCUniversal Media CFO Anand Kini pointedly addressed the intellectual property risks related to AI:
“One of my colleagues just said, ‘Let’s be careful: AI can be another word for plagiarism.’”
Generative AI and natural language processing models have shown an impressive ability to take a query and turn it into content that reads, looks, and sounds like original work. But unless the AI platform is producing content from a closed source of data that’s wholly owned by the entity using the technology, there’s at least some risk of running afoul of intellectual property or copyright laws when using AI to produce content.
Here, board members may need to know the difference between public and private large language models (LLMs) as they exercise governance. Public LLMs such as ChatGPT produce content based on information that’s publicly available and may bring heightened risks of users violating intellectual property laws. Those who use these platforms also run the risk that their proprietary material or data will be incorporated into that language platform and exposed to future users.
Private LLMs can be trained only on the closed set of data that is provided to them. Using a private LLM with access only to data your organization controls can reduce risks, but it also limits the universe of knowledge that’s available to the AI platform.
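One common way to approximate this closed-data pattern is to retrieve context only from documents the organization controls and pass just that context to the model. The sketch below assumes Python with scikit-learn; the complete function is a hypothetical stand-in for a privately hosted model endpoint, not a real API.

```python
# Sketch of the "closed data" pattern: the model sees only retrieved,
# organization-controlled context, never the open web.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

CLOSED_CORPUS = [  # only data the organization owns or licenses
    "Q2 sales rose 8% in the midwest region, driven by renewals.",
    "The 2023 acceptable use policy prohibits uploading client data.",
    "Vendor contracts must be reviewed by legal before signature.",
]

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k corpus passages most similar to the query."""
    vec = TfidfVectorizer().fit(CLOSED_CORPUS + [query])
    docs = vec.transform(CLOSED_CORPUS)
    q = vec.transform([query])
    scores = cosine_similarity(q, docs)[0]
    return [CLOSED_CORPUS[i] for i in scores.argsort()[::-1][:k]]

def complete(prompt: str) -> str:
    # Hypothetical stand-in for a privately hosted LLM endpoint.
    return "[model response would appear here]"

def answer(question: str) -> str:
    context = "\n".join(retrieve(question))
    # The prompt restricts the model to organization-controlled context.
    return complete(f"Answer using only this context:\n{context}\n\nQ: {question}")

print(answer("What does the acceptable use policy say about client data?"))
```

The trade-off the text describes is visible here: the model can draw only on what is in the closed corpus, which reduces intellectual property and data-exposure risk but also limits the knowledge available to it.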
When third parties are supplying AI tools, board members need to provide effective oversight. At a time when outsourcing has become common, board members are already quite familiar with handling third-party risks and should understand their responsibilities related to the organization’s use of third-party AI platforms and vendors to supply the technology. Board members need to understand how third-party AI tools are used within the organization and the inputs that drive their machine learning models. These third-party platforms can also introduce data privacy and cyber risks. Tools that operate in a way that is inconsistent with the organization’s culture or ethics should not be used.
Guarding the AI frontier: The board’s role
The role of the board in navigating the AI landscape is to provide oversight and guidance. This role is rooted in the same governance principles and ethics that have guided boards through past challenges, only now with a new twist. Enabling a variety of viewpoints to be heard, communicating clearly, and aligning expectations with the CEO’s scorecard are just a few of the strategies that can help ensure a smooth transition into the AI era.
To provide appropriate oversight over management’s use of AI, boards need expertise on the technology and the risks and opportunities associated with it. Finding this expertise may be difficult, but many organizations recently underwent a similar exercise in recruiting and developing topic-specific knowledge on cybersecurity. The same tactics may bear fruit in locating AI expertise.
As governance over this area continues to evolve, it’s important to remember that although AI is a technology, addressing it is profoundly human. AI affects human lives, from customers to employees to shareholders. Organizations in every industry across the globe are considering how AI can be used to augment human capabilities, increase productivity, and create a positive impact.
Here are some questions board members can ask themselves related to AI:
- Do we understand the AI initiatives of the company?
- Does the organization have the necessary internal or external AI expertise?
- Are we aware of the risks and potential benefits of AI?
- Do we have a clear role in AI policy, risk management and strategy?
- How do we ensure that AI use is aligned with the organization’s ethics?
The swift progress of AI demands a proactive, human-centered approach to governance. Boards play a crucial role in ensuring that AI is used ethically, productively, and in alignment with their organizations’ core values. As boards continue to traverse the AI landscape, they may wish to remember that the journey is as important as the destination.
Grant Thornton is an NACD strategic content partner, providing directors with critical and timely information and perspectives. Grant Thornton is a financial supporter of the NACD.
Content disclaimer
This content provides information and comments on current issues and developments from Grant Thornton Advisors LLC and Grant Thornton LLP. It is not a comprehensive analysis of the subject matter covered. It is not, and should not be construed as, accounting, legal, tax, or professional advice provided by Grant Thornton Advisors LLC and Grant Thornton LLP. All relevant facts and circumstances, including the pertinent authoritative literature, need to be considered to arrive at conclusions that comply with matters addressed in this content.
For additional information on topics covered in this content, contact a Grant Thornton professional.
Grant Thornton LLP and Grant Thornton Advisors LLC (and their respective subsidiary entities) practice as an alternative practice structure in accordance with the AICPA Code of Professional Conduct and applicable law, regulations and professional standards. Grant Thornton LLP is a licensed independent CPA firm that provides attest services to its clients, and Grant Thornton Advisors LLC and its subsidiary entities provide tax and business consulting services to their clients. Grant Thornton Advisors LLC and its subsidiary entities are not licensed CPA firms.