AI use case prioritization maximizes benefits
The time for boards to gently dip their toes into the AI waters has long passed. As important as a firm’s general grounding is, directors need to pivot to setting clear objectives, creating an AI ethics and governance framework, and establishing guardrails to ensure the organization’s AI adoption aligns with its legal, compliance and strategic directives.
“Some companies have integrated functions like Microsoft 365 Copilot and Anthropic’s Claude into their operations, and they think they’ve integrated AI,” said Grant Thornton Risk Advisory Services Principal Ethan Rojhani. “They have a tool, and they’re using it. But that barely scratches the surface of what needs to be done to make the best use of AI at any organization.”
At many organizations, the next step in AI governance after piloting a solution is a more systematic approach to prioritizing AI uses. Such an approach highlights which AI initiatives provide the highest benefits and therefore deserve funding. To provide proper governance over AI funding, boards should be asking management to identify:
- Whether the organization has a robust data governance program in place to enable effective use of tools such as AI.
- How ready the organization is to employ its institutional data stores to train AI models.
- Which processes would benefit most from AI implementation.
- The availability of quality data necessary to support AI efforts for those processes.
- The risks related to bringing AI into those processes.
- After these factors are assessed, which specific technologies are best applied for those use cases.
Unfortunately, many companies aren’t approaching these objectives with a well-designed process.
“If you sit on more than one board, you know that no two companies are approaching AI implementation the same way,” Rojhani said. “They’re either focused more on the technology than the risk, or they’re focused more on the risk than the technology. Many times, they’re not even getting out of the gate.”
Other times, they’re getting out of the gate with AI initiatives that don’t match their greatest needs — many are using AI in their lowest-risk, lowest-impact environments. As a result, they’re not optimizing their AI use and they’re missing valuable opportunities.
When companies do follow a well-designed, structured AI adoption process, though, their methods for doing so might vary. Organizations with strong governance, leadership and technology capabilities might choose to do this on their own. Others turn to third parties with deep understanding of AI.
Either way, understanding processes, data, risks and the best available technologies is essential for choosing the right AI initiatives.
Take inventory of processes, data and risks
The first step in identifying where AI can most help a business is to create a process inventory.
Typically, organizations get the most return on an AI investment that’s applied to processes that are highly repetitive, follow clear rules or patterns, and require a lot of human time to perform — but not much human time to verify. Depending on the company, these processes might include:
- Data entry and processing
- Invoice processing
- Customer support/call center support
- Contract review
- Document generation
- Inventory monitoring and management
Once these processes are identified, management needs to assess the maturity of the associated data supporting the processes. In homing in on a viable AI use case, management should favor supporting data that is accurate, complete, timely and consistent across different sources and periods.
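Those four dimensions can be turned into a simple readiness score. The sketch below is purely illustrative, not a Grant Thornton methodology: it assumes hypothetical record dictionaries with invented field names (`last_updated`, `source_values`), and it scores completeness, timeliness and consistency; accuracy typically requires comparison against an authoritative reference source, so it is omitted here.

```python
from datetime import date, timedelta

def assess_data_readiness(records, required_fields, freshness_days=90,
                          today=date(2025, 6, 30)):
    """Score a dataset on completeness, timeliness and consistency (0.0-1.0 each).

    `records` is a list of dicts; each may carry a `last_updated` date and a
    `source_values` dict mapping source-system names to the value each reports.
    All field names here are hypothetical placeholders.
    """
    if not records:
        return {"completeness": 0.0, "timeliness": 0.0, "consistency": 0.0}

    # Complete = every required field is populated.
    complete = sum(
        all(r.get(f) not in (None, "") for f in required_fields) for r in records
    )
    # Timely = the record was refreshed within the freshness window.
    fresh = sum(
        r.get("last_updated") is not None
        and (today - r["last_updated"]) <= timedelta(days=freshness_days)
        for r in records
    )
    # Consistent = every source system reports the same value for the record.
    consistent = sum(
        len(set(r.get("source_values", {}).values())) <= 1 for r in records
    )
    n = len(records)
    return {
        "completeness": complete / n,
        "timeliness": fresh / n,
        "consistency": consistent / n,
    }
```

Scores like these give management a comparable baseline across candidate datasets before committing to a use case.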
This can be a problem for leadership, as Grant Thornton’s CFO survey for the first quarter of 2025 shows that for many organizations, data quality is a barrier to AI implementation. Although two-thirds of finance leaders said their data is at least adequate to support digital transformation, one-third expressed serious reservations about the quality of their data.
Boards should encourage management to pursue AI initiatives despite data limitations. While the most promising AI use cases often rely on well-structured data, leaders can refine and enhance specific datasets for AI projects without requiring a costly, enterprise-wide data overhaul.
“Get your use cases winnowed down to five top choices, and then look at the data undergirding those processes,” said Grant Thornton Risk Advisory Services Principal Johnny Lee. “You isolate the data requiring improvement without an all-encompassing data renovation project that would take months or years to complete.”
After identifying the ideal processes and their data requirements, the next step is likely a pilot or proof-of-concept exercise, not a fully funded, long-term initiative. Once a pilot demonstrates plausible ROI, management should consider a more formal risk assessment to confirm alignment.
This assessment should align with the AI ethics and governance guidelines, within the guardrails the organization has created. The risk assessment typically should consider:
- Privacy and security
- Regulatory compliance
- Transparency
- Accountability
- Fairness and ethical considerations consistent with the organization’s principles
- Reliability and resilience
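Taken together, the benefit, data-readiness and risk factors above lend themselves to a simple weighted ranking. The sketch below is a hypothetical illustration, not a standard methodology: the 0-10 rating scale, the weights and the candidate names are all invented, and risk is inverted so that higher risk lowers a use case's score.

```python
def prioritize_use_cases(candidates, weights=(0.4, 0.3, 0.3)):
    """Rank candidate AI use cases by a weighted score.

    Each candidate is (name, benefit, data_readiness, risk), with each factor
    rated 0-10. Risk counts against the score, so it is inverted (10 - risk).
    The weights and 0-10 scale are illustrative, not a standard methodology.
    """
    w_benefit, w_data, w_risk = weights
    scored = [
        (name, round(w_benefit * benefit + w_data * data + w_risk * (10 - risk), 2))
        for name, benefit, data, risk in candidates
    ]
    # Highest composite score first.
    return sorted(scored, key=lambda item: item[1], reverse=True)
```

A ranking like this makes the "why this use case over that one" conversation with the board concrete, even if the weights themselves are a matter of judgment.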
“For example, if you’re using data that’s not well-governed or highly constrained by privacy, ownership or intellectual property concerns, the use case might expose you to risks that outweigh the potential benefits,” Lee said. “The risks related to each use case need to be thoroughly assessed before you fund full-blown initiatives.”
Find the right AI solution
Once the optimal use cases are identified for AI initiatives, management needs to focus on selecting the right applied technology for performing the tasks those processes require.
Boards need to make sure that professionals with a comprehensive knowledge of AI tools — whether they’re employed in-house or by a third party — recommend the right tool for the job. If management is using ChatGPT or Copilot for every AI-related implementation, there’s a good chance that the organization is missing out on tools better suited for specific processes.
For example, an organization that chooses to automate contract review might use a tool such as Harvey.ai, a large language model designed by lawyers for lawyers to support the work that in-house attorneys perform every day. Harvey.ai can analyze contracts, spot key legal terms and even propose draft contract clauses, dramatically reducing the time counsel spends reviewing contracts.
“Instead of having an associate reading through contracts for many hours, a properly trained technology can home in on key areas of concern within minutes,” Lee said. “It’s a fit-for-purpose AI tool, and there are a lot of tools like that out there.”
In some cases, organizations might be selecting pilots or full-blown use cases using agentic AI, which uses advanced reasoning capabilities to operate independently and can interact with other tools and software — creating next-generation functionality. The output of these tools still needs human review, of course. But where an ordinary chatbot in customer service would answer basic questions, an agentic AI system might also check account balances, recommend payment options and complete transactions based on user decisions.
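The difference between a plain chatbot and an agentic flow can be sketched in a few lines. The toy example below is entirely invented for illustration: the request strings, account fields and "tools" (plain functions standing in for real balance-lookup and payment APIs) are hypothetical, and a production agent would decide which tools to call via a reasoning model rather than hard-coded branching.

```python
def handle_request(request, account):
    """Toy contrast between a plain chatbot and an agentic flow.

    A plain chatbot would only answer the question; this sketch also lets the
    "agent" call tools (here, plain functions) and chain their results.
    All names and logic are invented for illustration.
    """
    def check_balance(acct):
        return acct["balance"]

    def recommend_payment(balance, amount_due):
        # Suggest paying in full when the balance covers it, else a plan.
        return "pay_in_full" if balance >= amount_due else "installment_plan"

    if request == "pay bill":
        balance = check_balance(account)                          # tool call 1
        plan = recommend_payment(balance, account["amount_due"])  # tool call 2
        return {"balance": balance, "recommendation": plan}
    return {"answer": "I can help with balances and payments."}
```

Even in this toy form, the chaining of tool calls is what distinguishes the agentic pattern, and it is also what makes human review of the output essential.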
Before AI technology is deployed at scale — whether purchased or developed internally — boards should make sure that management applies a risk lens. All tools should be examined for privacy and security controls, with boards ensuring that management has a process for verifying that any internally developed or third-party tools meet the requirements set forth in the organization’s guardrails.
Once management has chosen the best AI initiatives, they can be brought forward for funding, testing and full implementation.
Integrating AI into board oversight operations
While overseeing management’s AI implementation, boards also have continuing opportunities to upgrade their own use of AI, which can enhance directors’ effectiveness and efficiency while deepening their understanding of how AI tools work.
The first generation of AI-assisted board governance tools focused on summarizing key materials such as management reports, financial statements and competitor analyses. Now, board members can train AI to provide real-time alerts on industry news, organizational updates, competitive shifts and other critical developments that might otherwise go unnoticed.
“Boards might want to consider integrating AI into their board operations to make sure they’re getting important information in a timely manner,” Rojhani said.
New AI tools that are useful to boards and the organizations they oversee will continue to emerge as innovators develop new technology and add new features to existing applications. Because of this, the structured methodology for determining a company’s best AI use cases will be ongoing — and the same can be said of the best use cases for individual directors.
There will likely always be more opportunities to improve processes through technology. When boards make sure that management selects the best AI use cases over time, they provide oversight that gives their organizations an edge in a landscape where the most agile competitors will constantly seek their own AI-related improvements.
“Boards have spent a good bit of time getting their AI guardrails in place,” Lee said. “Now they need to have substantive conversations with their management teams about how they plan to use AI. How are they choosing proof-of-concepts? Why choose one use case over another use case? And how are they making sure the data is secured for these use cases? The boards that require a structured approach to this can help their organizations succeed.”
Content disclaimer
This content provides information and comments on current issues and developments from Grant Thornton Advisors LLC and Grant Thornton LLP. It is not a comprehensive analysis of the subject matter covered. It is not, and should not be construed as, accounting, legal, tax, or professional advice provided by Grant Thornton Advisors LLC and Grant Thornton LLP. All relevant facts and circumstances, including the pertinent authoritative literature, need to be considered to arrive at conclusions that comply with matters addressed in this content.
For additional information on topics covered in this content, contact a Grant Thornton professional.
Grant Thornton LLP and Grant Thornton Advisors LLC (and their respective subsidiary entities) practice as an alternative practice structure in accordance with the AICPA Code of Professional Conduct and applicable law, regulations and professional standards. Grant Thornton LLP is a licensed independent CPA firm that provides attest services to its clients, and Grant Thornton Advisors LLC and its subsidiary entities provide tax and business consulting services to their clients. Grant Thornton Advisors LLC and its subsidiary entities are not licensed CPA firms.