AI regulatory landscape and the need for board governance

 

Boards need to take action now on AI

 

Miriam Vogel, Chair of the National AI Advisory Committee (NAIAC), recently highlighted that the pace of artificial intelligence (AI) adoption is set to "exponentially increase, at warp speed." Board members face the formidable challenge of managing AI-related risks in a rapidly shifting regulatory landscape.

 

During the 2023 National Association of Corporate Directors (NACD) Summit, Vogel emphasized the need for directors to adeptly navigate this transformative technology. Organizations must responsibly harness the potential of AI, taking advantage of its capabilities while making sure they don’t inadvertently harm stakeholders.

“The interplay between AI and cybersecurity is crucial. It’s imperative to ensure AI bolsters cyber safety rather than becoming a conduit for intrusions. It can do both.”

Miriam Vogel

Chair, National AI Advisory Committee

 

Vogel, the president and CEO of EqualAI, an organization that educates on and helps implement responsible AI governance, chairs the NAIAC, which advises the President and the White House on AI policy. While regulation is still evolving, boards need to take action now. “We all need to have a plan in place, and we need to be thinking about how are you using it and whether it is safe,” she said.

 

She underscored the urgency, noting that journalists are investigating where AI has gone wrong and where it’s discriminating against people. Additionally, lawyers are seizing on potential litigation opportunities against ill-prepared, deep-pocketed organizations. "Good AI hygiene is non-negotiable today, and you must have good oversight and best practices in place," she asserted.

 

Despite the lack of comprehensive Congressional AI legislation, Vogel clarified that AI is not without oversight. In a recent joint statement, the leaders of four federal agencies committed to ensuring fairness in emerging AI systems and to using their enforcement powers if AI perpetuates unlawful bias or discrimination.

 

AI regulatory bills have been proposed in more than 30 state legislatures, and the international community is also ramping up efforts. Vogel cited the European Union’s AI Act as the AI equivalent of the GDPR, the EU regulation that established strict data privacy rules affecting companies worldwide.

 

Why AI governance matters

 

In this environment, Vogel said there are four reasons that senior executives and board members need to care about proper governance over the development and use of AI:

“We need to verify that we do not have an implicit bias toward AI—that it is based on mathematical algorithms and thus accurate. Bias and other risks can embed throughout the AI life cycle.”

Miriam Vogel

Chair, National AI Advisory Committee

  1. Employee satisfaction: Many companies are struggling to find highly skilled people now, and Vogel said employees don’t want to work for an organization that uses AI programs that cause harm, perpetuate bias and discrimination, or are simply ineffective. On the other hand, employees do like being part of a company that uses the latest technology ethically and effectively.
  2. Brand integrity: An organization that makes good use of AI without causing harm builds trust with its customers and community through its deployment of technology.
  3. Customer expansion: Companies that use AI in the right ways can broaden their customer base and deepen their relationships with existing customers without harming their brand.
  4. Liability and litigation: Without targeted AI legislation on the books, judges and juries will set precedents over the next several years that determine when use of the technology infringes on copyrights or discriminates against protected groups. That’s a cause for caution and for careful organizational policymaking by senior management under the oversight of the board.

“No matter what industry you’re in, if you’re using AI, you need to be implementing best practices, even though it’s a tricky time where our national and international standards are not clearly outlined,” Vogel said.

 

Practice good AI hygiene

 

Many good AI practices are offshoots of what’s generally known as good corporate culture, Vogel said.

 

“A significant element of responsible AI comes from making sure that people know what is acceptable and what is not acceptable. Make sure your staff knows that it is safe to raise a concern if they encounter a problem and make sure they know who to raise it to,” she said.

 

She encouraged board members to ensure that trust is built internally in their organizations as well as with their customers. Vogel shared five best practices for good AI hygiene:

Key insight: Create diversity in AI systems

Each human touchpoint in an AI system carries a risk of harm, intrusion and discrimination, Vogel said. But each touchpoint is also an opportunity for leadership to correct potential problems and embed diversity.

 

“Are your developers able to think beyond their own experience?” Vogel asked. “Make sure you have diversity in who’s developing the systems.”

  1. Ensure that your AI use reflects your corporate values: Vogel suggested that companies use a framework to keep their AI use in line with their values. Several frameworks are available, but she recommended the National Institute of Standards and Technology (NIST) AI Risk Management Framework as a valuable tool. Additional tools recommended by Vogel include EqualAI’s An Insider’s Guide to Designing and Operationalizing a Responsible AI Governance Framework and the EqualAI Algorithmic Impact Assessment tool, which is based on the NIST framework.
  2. Establish accountability in the C-suite: AI accountability can’t be pushed down to the IT function. “This is front line. This is top line. It will have a cost. It will have liability. It will be fast-moving. So someone in the C-suite needs to be the final stop on these decisions,” Vogel said.
  3. Communicate your AI frameworks and processes: Frontline workers and supervisors need to understand the framework being used for AI governance and the processes required to align AI use with it.
  4. Document processes and how they are followed: Processes need to be documented for easy review by the people who need to follow through on them. And users of AI throughout the organization should be required to document how they are following the processes.
  5. Audit continuously: “AI will continue to iterate and create new patterns, new problems and new recommendations,” Vogel said. “Make sure you’re consistently monitoring on a routine cadence so you’re continually aware of whether it’s providing the results you intended.”
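
To make that last practice concrete, here is a minimal sketch, in Python, of what one automated check on a routine cadence might look like. It applies the widely cited "four-fifths" rule of thumb for disparate impact to a log of model decisions. The data format, names and threshold below are illustrative assumptions for this article, not part of the NIST framework or any tool Vogel cited.

    from collections import defaultdict

    # Rule-of-thumb threshold: flag any group whose favorable-outcome rate
    # falls below 80% of the best group's rate (the "four-fifths rule").
    # The threshold, data shape and names here are illustrative assumptions.
    ALERT_THRESHOLD = 0.80

    def selection_rates(decisions):
        """Compute each group's favorable-outcome rate from (group, favorable) pairs."""
        totals, favorable = defaultdict(int), defaultdict(int)
        for group, was_favorable in decisions:
            totals[group] += 1
            favorable[group] += was_favorable
        return {group: favorable[group] / totals[group] for group in totals}

    def audit(decisions):
        """Return the groups whose rate falls below ALERT_THRESHOLD of the best rate."""
        rates = selection_rates(decisions)
        best = max(rates.values())
        if best == 0:
            return []
        return [group for group, rate in rates.items() if rate / best < ALERT_THRESHOLD]

    if __name__ == "__main__":
        # Stand-in for one reporting period of logged model decisions.
        sample = [("A", True), ("A", True), ("A", False),
                  ("B", True), ("B", False), ("B", False)]
        flagged = audit(sample)
        if flagged:
            print(f"Escalate to the accountable executive: groups {flagged}")
        else:
            print("No disparity above threshold this period.")

In practice, a check like this would run on the organization's real decision logs on whatever cadence the accountable executive sets, and a flagged result would feed the documented escalation process described above rather than a print statement.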

 

Get everybody engaged

 

Vogel's passion for AI spans her roles as a board member, executive, citizen and consumer. She emphasized collective responsibility in AI's ethical deployment, advocating for collaborations with legal experts, social scientists and academia.

 

Grant Thornton Chief Strategy Officer Chris Smith, who moderated Vogel's fireside chat presentation at the NACD Summit, acknowledged the burgeoning board-academia partnership. He remarked, "There aren’t a lot of experts who can scale across every private and public board, so increasingly I’m seeing academia bridge expertise gaps in boardrooms, promoting innovative governance solutions."

 

The vast majority of board members overseeing AI are not computer scientists, but that’s OK. Smith said the technological breakthroughs of the past 10 years have led to an awakening among board members about their roles and responsibilities—and their ability to use sound governance principles to embrace opportunities and address risks.

 

Vogel welcomes that oversight.

 

“We can no longer leave it to the engineers to figure out how to create the efficiencies and answer humanity’s most challenging questions while coding at a rapid clip,” she said. “We need to start answering these questions together.”

 

AI governance: a manageable challenge

 

Boards that stay true to governance principles should be confident about AI, according to a video accompanying this article that features Grant Thornton Chief Strategy Officer Chris Smith.

 
