How boards provide steady guidance amid AI transformation

 

Directors are tackling key oversight issues as AI evolves

 

As company leaders work to harness the power of AI, veteran board members, technology specialists and Grant Thornton professionals urged directors to be vigilant, carefully consider key risks and opportunities, and remember their duties as board members.

 

Anna Catalano, who serves on multiple boards and advises CEOs and boards on leadership, spoke during a Grant Thornton presentation at the recent National Association of Corporate Directors Summit. She said board members need to consider three key points:

  • Who do you want to be as a company?
  • Don’t confuse productivity with judgment.
  • Be true to who you are.

For the second time in two years, Grant Thornton’s NACD Summit presentation reinforced the firm’s consistent message related to rapidly emerging AI technology. Organizational leaders need to embrace opportunities to innovate and drive business improvement while carefully managing risks, measuring return on investment, and adhering to the precepts beyond profit that their organization holds dear.

Anna Catalano

“Human judgment has never been more important. This judgment is needed to test whether the direction AI is taking us reflects the values of our organization.”

Anna Catalano

Board member and leadership advisor

 

And above all, while AI holds tremendous promise, it falls to people to make responsible decisions about its adoption and use.

 

“Human judgment has never been more important,” Catalano said. “This judgment is needed to test whether the direction AI is taking us reflects the values of an organization and reflects the values of our society.”

 

During the presentation, panelists described emerging considerations for boards related to AI, including:

  • New trends in committee structures
  • Responsibilities for encouraging slow-adopting executives
  • Handling numerous AI funding requests
  • Considering both downside risks and upside opportunities
  • Preserving organizational culture
  • Addressing quality considerations

That’s a lot for boards to consider, but organizations and communities can thrive when directors take these issues to heart.

 
 

Governance structures evolve

 
 

Boards are responsible for understanding strategy and risk, including AI adoption at their organizations. Helpful questions can include:

  • Is there enough AI knowledge in the boardroom to provide appropriate oversight?
  • Is a shift in board committee structures warranted as AI emerges?
  • From the C-suite to the lowest rungs of the workforce, are people being encouraged and enabled to take advantage of this technology?
  • How is management identifying which AI projects will receive funding?

Before boards can provide appropriate oversight, they need to be educated on the underlying technology. AI’s quick rise in popularity means that there are plenty of opportunities for board directors to get insights on proper AI oversight through the NACD and other channels. Adding directors who have deep AI experience to the board also can provide a tremendous benefit to governance.

 
 

In recent years, boards have made considerable progress in adding directors with technology specializations to their rosters, due in large part to oversight requirements related to cybersecurity concerns. This same risk-based approach will be required for AI oversight, but boards should also consider whether to add directors with AI-specific skills to their ranks.  

 

Questions about boards’ competence in AI oversight are why some boards are considering updating their structures by adding a technology-focused committee. Some boards already have innovation and technology committees, and the number of Fortune 100 boards with standalone technology committees increased from 7 in 2012 to 36 in 2022, according to the NACD.

 

In recognition of this trend, panelists said board members should lean in to closely examine an organization’s management-level foundation for AI efforts. Because board members are removed from day-to-day operations, they can serve as a check on in-house AI councils that might miss opportunities for standardization.

 

For organizations that don’t have management-level AI councils, boards can explain the importance of establishing such a cross-functional group to foster best practices, encourage frequent and candid communication, and promote harmonious use of AI across the enterprise.

Janet Malzone

“By ensuring that a strong foundation is built now, boards can pave the way for strong AI performance that’s sustainable — with properly managed risks.”

Janet L. Malzone

CEO, Grant Thornton LLP
Principal, Grant Thornton Advisors LLC

 

“Boards need to make sure management’s AI initiatives are grounded appropriately and have flexibility within their structures to accommodate changes that might occur through M&A or other shifts in the environment,” said Grant Thornton LLP CEO Janet Malzone. “By ensuring that a strong foundation is built now, boards can pave the way for strong AI performance that’s sustainable — with properly managed risks.”  

 

To assist in evaluating management’s AI infrastructure, boards increasingly are welcoming professionals with deep knowledge of AI to their ranks.

 

Nearly one-fourth (23%) of respondents to an NACD Summit audience survey said they have dedicated AI experts on their boards, a huge increase from 4% the previous year. 

 


Getting executives and employees aligned

 
 

The people-oriented considerations for appropriate AI governance start with the highest levels of management. Boards need to ascertain whether executives — including the CEO — appropriately support an environment that will help the organization capitalize on AI benefits.

 

If the CEO doesn’t understand those benefits, Catalano said, “you might have the wrong CEO.” If a good CEO understands the importance of AI but just isn’t knowledgeable about it, a supportive board (coupled with in-house or outside specialists) can get them up to speed.

 

“It’s a critical part of the success of their business, and they need to understand what’s happening,” Catalano said. “A board can help by suggesting benchmarking with peers in the industry and by putting the CEO in touch with other CEOs who have a better understanding of AI.”

 
 

For employees throughout the organization, trends revealed in Grant Thornton’s CFO survey for the third quarter of 2024 show the potential for a training deficit related to AI. While 90% of CFOs said their company is currently using generative AI or exploring potential uses for the technology, just 50% have formal training in place on the use of these technologies — a decrease of eight percentage points from the survey for the previous quarter.

 

Although the portion of board members taking an active role in understanding generative AI governance rose 11 percentage points to 54%, the gap between organizations that plan to use the technology and those that provide training in this area is concerning.

 

“While we’re investing in this technology, we’re also cutting back on the investment in our people to use that technology to increase productivity and also to manage the associated risks,” said Grant Thornton Growth Advisory Services Managing Principal Joe Ranzau. “There’s a disconnect there.”

 

Panelists suggested that boards encourage leaders to take at least some of the savings associated with AI and invest those funds in worker training. When managed appropriately, this can create a success loop in which AI savings fund AI training, which in turn drives more AI savings.

 

Boards have opportunities through their oversight of human resources to promote more effective and productive people management in many ways. Many HR leaders say they’re using AI for tasks such as writing performance reviews and screening candidates — although the risks of bias need to be monitored carefully in candidate screening.

Joe Ranzau

“AI can actually be the thing that sees our people’s competencies and helps us see them.”

Joe Ranzau

Managing Director, Growth Advisory Services
Grant Thornton Advisors LLC

 

Ranzau also said some HR leaders are using AI to scour their existing workforce for capabilities that might help them in other areas. He said declining birth rates in the U.S. will create a need for greater productivity and innovative sourcing in the coming years, and AI’s ability to discover current employees’ hidden skills can help companies fill jobs with internal hires. This reduces the costs and productivity losses associated with recruiting new employees, a significant expense that comes with a long lead time before new hires reach full productivity.

 

“Some people are just afraid to raise their hand [for new opportunities], and some people don’t recognize the abilities they have,” Ranzau said. “AI can actually be the thing that sees our people’s competencies and helps us see them.” 

 
 

Funding for AI optimization

 
 

One question that boards might ask, according to the panelists, is where investments could be made to optimize AI. Investment in AI is essential to fund objectives related to:

  • People and culture: AI technology can’t be optimized if people don’t know how to use it and aren’t engaged with it. Boards might ask what investment is needed to enable the training that will help people use AI effectively.
  • Goals and ethics: Aligning AI with organizational objectives and regulatory and ethical boundaries — including data privacy and security requirements — helps optimize the technology’s effectiveness. It is important that those areas are funded appropriately.
  • Data and systems: Boards need to ask how an organization is investing to prepare its systems and change its architecture to enable new capabilities, work seamlessly across functions, and integrate new systems when M&A occurs.

Balancing the funding for all these requirements is difficult, and no two organizations will be exactly alike in how they navigate this challenge. Boards should consider these factors as part of the organization’s long-term capital allocation plan, and Catalano suggested that boards take a two-bucket approach to these investments.

 

She said AI projects that are meant to make incremental improvements to the business can be treated through traditional budgeting approaches that exist for the basic architecture of the organization. For AI projects with the potential to transform the entire company or disrupt the industry, capital allocation should be treated similarly to research and development resourcing, Catalano said.

 

“Relative to innovation, if you’re looking at capabilities that can transform or disrupt your business model, I would treat that the way we look at R&D,” Catalano said. “Because you’re not going to get immediate benefits from it. But not investing in these capabilities is a huge risk.”

 

Panelist Margot Carter, President and Founder of Living Mountain Capital LLC and a member of the boards of three companies listed on the New York Stock Exchange, suggested that companies for which futuristic, transformative use cases seem out of reach can start on their AI path by getting familiar with the technology in low-risk, high-reward use cases.

 

Carter, who also co-founded the AI-powered data analytics company Cien, said sales, marketing and finance functions often provide fertile ground for identifying “messy sales data” and other business problems that AI can address effectively. In sales and marketing, AI can enable deeper research and a comprehensive examination of sales representatives’ production, comparing total sales with lead quality to uncover previously hidden patterns of behavior. In finance, AI tools can be used to detect fraud, overpayment and duplication. Carter said boards that approve funding for these types of projects help their organizations become more comfortable with AI, which could pave the way for transformative AI adoption in the future.

Margot Carter

“You can get a tool off the shelf that’s tested and verified, so you don’t have to spend a fortune on it.”

Margot Carter

President and Founder, Living Mountain Capital LLC
Co-Founder, Cien

 

“You can get a tool off the shelf that’s tested and verified, so you don’t have to spend a fortune on it,” Carter said. “You spend a minimum amount of money and see if it can work. Hopefully you will find opportunities for low-cost investments that yield high returns with low risk.”

 

Carter suggested that directors or management should ask about the data security and potential biases related to the tool as part of the company’s overall strategy and risk management activities. 

 
 

Manage risks — downside and upside

 
 

Grant Thornton Risk Advisory Services Principal Ethan Rojhani said boards need to ensure that management is thoughtfully ranking the risks related to each individual AI use case — for both upside and downside risks — based on the impact they could make on the organization. Consider, for example, the AI solutions being created in human resources compared with those implemented in advertising.

 

Ethan Rojhani

“If you’re using AI for hiring and a candidate can prove the algorithm was biased, you may have just broken the law.”

Ethan Rojhani

Principal, Risk Advisory Services
Grant Thornton Advisors LLC

“There’s a different level of risk from the advertising side,” Rojhani said. “If you lose a customer, that’s a problem. But if you’re advertising to 1 million customers, losing one might not be such a big deal. Whereas, if you’re using AI for hiring and a candidate can prove the AI algorithm was biased, you may have just broken the law.”

 

The risks and opportunities can be very different with each AI use case, and boards can have a significant impact just by asking management how it is thinking about the risks in each use case.

 

Because AI technology evolves rapidly and depends on vast quantities of data, one of the most important AI-related risks is the ever-present threat to data security and privacy. Grant Thornton Risk Advisory Services Principal Johnny Lee said the three legs of the AI data security stool are:

  • Preventive safeguards that address potential attacks and intrusions
  • Detective capabilities that monitor for signs that preventive controls have been surmounted
  • Insurance coverage to mitigate damage that can occur when preventive and detective controls prove insufficient

Johnny Lee

“I don’t think there can be AI adoption on a meaningful scale without insurance.”

Johnny Lee

Principal, Risk Advisory Services
Grant Thornton Advisors LLC

“I don’t think there can be AI adoption on a meaningful scale without insurance,” Lee said.

 

AI tools also need to be monitored for hallucinations — incorrect outputs that are often caused by data errors or biases — as well as “drift” within AI models. Drift occurs when the use of AI causes subtle — and sometimes not-so-subtle — shifts in the models, which in turn can produce skewed outputs as the model “learns.”

 

“If you’re not detecting that drift and its relative impact on liability, transparency, bias and other factors, then you may not be testing the model in a way that would be deemed responsible,” Lee said.

 

The good news, however, is that technology exists that can detect data anomalies that contribute to drift, with the capability to eliminate anomalous data from the model or adjust the model to manage drift.
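
As an illustration of what such monitoring can look like in practice, the sketch below compares a baseline feature distribution against recent production data using a two-sample Kolmogorov-Smirnov test and flags potential drift. It is a minimal, hypothetical example; the data, feature values and alert threshold are illustrative rather than drawn from any specific tool the panelists described.

```python
# Minimal, illustrative drift check: compare the feature distribution a model
# was validated on against what it sees in production. All values here are
# hypothetical; real monitoring would use actual model inputs or outputs.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(seed=42)

# Baseline: distribution of a feature at the time the model was approved.
baseline = rng.normal(loc=0.0, scale=1.0, size=5_000)

# Recent production data: simulated here with a small shift to mimic drift.
recent = rng.normal(loc=0.3, scale=1.1, size=5_000)

# Two-sample Kolmogorov-Smirnov test: a low p-value suggests the recent data
# no longer follows the baseline distribution, i.e., the inputs have drifted.
statistic, p_value = ks_2samp(baseline, recent)

DRIFT_ALERT_THRESHOLD = 0.01  # hypothetical alerting threshold

if p_value < DRIFT_ALERT_THRESHOLD:
    print(f"Potential drift detected (KS statistic={statistic:.3f}, p-value={p_value:.2e})")
else:
    print(f"No significant drift detected (p-value={p_value:.2e})")
```

In practice, checks like this run on a schedule against live model inputs and outputs, and alerts feed into the remediation steps described above, such as removing anomalous data or adjusting the model.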

 

In their oversight role, boards might ask management how it is addressing these risks, what risk-management framework is being used, and whether third-party services are needed to supplement the internal resources devoted to these tasks. In addition to these downside risks, it’s also important for boards to consider the upside risks that can occur from not taking full advantage of AI.

 
 

Companies that don’t take advantage of incremental productivity enhancements offered by AI risk falling behind competitors who are more progressive with the technology. In some instances, organizations that don’t stay on top of AI capabilities risk seeing their whole business model become obsolete.

 

Panelists said substantial disruption is possible for:

  • Companies and organizations that bill by the hour: AI can drive tremendous productivity improvements that might demand billing for output rather than hours. Compensation will look different for firms that use technology to achieve exponentially more results in a shorter time.
  • Customer service call centers: Advanced AI chatbots in some cases have exceeded the customer satisfaction ratings of human operators. On a macroeconomic level, AI may pose immense risks, for example, to the economy of countries that depend heavily on call center and business process outsourcing jobs.
  • Pharmaceutical companies: Drug discovery can be much quicker when generative AI is used for deviation investigation.

In these cases and many others, companies that lag behind their competitors in AI adoption might pay a heavy price.

 

“As directors, understand what doors AI is going to open for you and what you can do to move quickly on things that matter to maintain a sustainable business,” Catalano said. “Because it will change fast.”

 

Sometimes, an outside perspective is useful in determining which AI opportunities make sense for an organization. Bringing in a third party to present possibilities to the board and the management team can unleash untapped potential, especially when brainstorming sessions are held immediately afterward.

 

“We all try to stay ahead of being disrupted,” Carter said. 

 
 

Guardians of culture

 
 
 

Perhaps the most important asset that boards need to protect from disruption is the organization’s culture. If AI damages the culture, the negative repercussions won’t be offset by improvements in productivity or functionality.

 

“The more technology plays a role in our work lives, the more important humanity becomes in everything we do,” Catalano said.

 

The best results are achieved when AI is managed by a responsible human being with the right subject matter expertise. Preserving the culture enables AI to deliver its full complement of benefits while the humans steer the technology clear of the risks to the organization — and society.

 

The coming years will be filled with enterprising employees coming to management with capital requests for AI initiatives, supported with details about incremental improvements that the technology will deliver for a given function. Directors need to be able to understand the impact of these initiatives — and all initiatives in aggregate — on the rest of the organization, the employees, the customers, the shareholders, and the community at large.

Kjell Carlsson

“You need to rationalize the complexity while still preserving the value-added insights.”

Kjell Carlsson, Ph.D.

Head of AI Strategy, Domino Data Lab

 

Panelist Kjell Carlsson, Head of AI Strategy for Domino Data Lab, said boards can be a steadying force in an emerging technology environment that is chaotic without proper oversight.

 

“You need to rationalize the complexity while still preserving the value-added insights,” he said. “That is a difficult thing to do, and it’s particularly difficult for you on the board to do this well. But you can shine a spotlight on it … You need to be driving.”

 

Catalano said that as companies discover ways to accomplish goals more quickly and cheaply through AI, it’s up to boards to make sure the uses of AI reflect the values the companies stand for.

 

“It’s really important that we don’t disown the importance of making the calls that need to be made,” Catalano said. “We have to, as leaders and board directors, make the calls, the judgments and the decisions.”

 

 

 

About the guest panelists

 

Kjell Carlsson is Head of AI Strategy for Domino Data Lab. He advises organizations on driving business impact with AI and data science and participates in keynotes, panels, workshops, consulting and research. He also hosts the Data Science Leaders podcast, where he interviews executives and industry observers about AI best practices.

 

Margot Carter is co-founder of Cien, a data analytics company that uses AI to improve sales analytics and transformation to drive meaningful value for clients. Cien serves clients by providing them with effective, secure and actionable ways to incorporate AI in their processes while increasing profitability and exit value. She also is a member of the boards of three companies listed on the New York Stock Exchange. She is chair emeritus of NACD North Texas.

 

Anna Catalano serves as an independent director for public and private corporations and not-for-profit organizations. She is a board member of HF Sinclair Corp., Frontdoor, Ecovyst and Hexion Inc. She also is a director of the NACD Corporate Directors Institute, is former chair of the NACD Texas TriCities Chapter and is a co-founder of the World Innovation Network.

 
 

 
Content disclaimer

This content provides information and comments on current issues and developments from Grant Thornton Advisors LLC and Grant Thornton LLP. It is not a comprehensive analysis of the subject matter covered. It is not, and should not be construed as, accounting, legal, tax, or professional advice provided by Grant Thornton Advisors LLC and Grant Thornton LLP. All relevant facts and circumstances, including the pertinent authoritative literature, need to be considered to arrive at conclusions that comply with matters addressed in this content.

For additional information on topics covered in this content, contact a Grant Thornton professional.

Grant Thornton LLP and Grant Thornton Advisors LLC (and their respective subsidiary entities) practice as an alternative practice structure in accordance with the AICPA Code of Professional Conduct and applicable law, regulations and professional standards. Grant Thornton LLP is a licensed independent CPA firm that provides attest services to its clients, and Grant Thornton Advisors LLC and its subsidiary entities provide tax and business consulting services to their clients. Grant Thornton Advisors LLC and its subsidiary entities are not licensed CPA firms.

 

 


 
