AI solutions can introduce new risks in any industry — but in healthcare, the stakes are higher.
“When it's healthcare, it's not a number, it's not a spreadsheet, it can be a person and their life,” said Grant Thornton Strategic Assurance and SOC Services Manager Sean Brennan. “The implications of unsecure, unsafe, ineffective AI could lead to human consequences, beyond a financial repercussion for a corporation or entity. There's a big difference there.”
Beyond maintaining HIPAA compliance, healthcare solutions need to fend off a range of emerging cybersecurity and data privacy threats. When an AI solution analyzes, processes and outputs protected health information, it’s important to ensure the necessary controls are in place to protect that data against the emerging threats.
AI security considerations
That’s why healthcare organizations need to continually evaluate the security around their AI solutions. “Accordingly, the proper risk-management formulation usually begins with the governance controls that are in place to ensure that bad things don't happen — or, if they happen, that they can be detected and remediated quickly,” said Grant Thornton Risk Advisory Principal Johnny Lee. “For healthcare companies, the initial inquiry should focus on whether the company has the rights to use data input into a system as inputs to an AI model. More specifically, does the organization have the requisite consent from the individuals whose personal health information may be used in this fashion?”
The questions become even more complex if an AI solution is provided by a third-party vendor, or if that solution interacts with another third-party solution. “Do you know what your downstream partners or, in the HIPAA language, your business associates are doing with these data?” Lee asked. “Because eventually, liability likely will attach to the company charged with safeguarding the data and, more often than not, this may be your organization. If you haven't established enough structure around your internal controls for data flows — and your business associate agreements don’t outline clear usage rights, consent considerations, and other requirements — then your organization can be dragged into litigation and related regulatory scrutiny. Even if you're not found ultimately liable, such proceedings are painful, distracting, and very expensive.”
For these reasons, the risks of an AI solution should be contemplated as a category of enterprise risk.
Enterprise risk mitigation
To sufficiently understand and manage the risks in AI solutions, healthcare organizations need to look across the impacted areas. “AI is not a standalone technology that you're bolting into one system,” Lee said. “Over time, this will become integral to numerous processes — and, by extension, to numerous systems, some of which are potentially managed by third parties.”
That means organizations need to get input and buy-in from areas outside of the technology team. “AI governance done well involves organizations treating AI as a category of enterprise risk — not something that lives in a single silo or functional area,” Lee said. “You need to bring a multidisciplinary team, establish enterprise-wide governance structures and reporting, memorialize key decisions, and provide transparency to the board on adoption over time, just like you would for your cybersecurity initiatives.”
To ensure they have the appropriate controls, and to communicate that assurance to board members, third-party partners and others, healthcare organizations can apply the HITRUST framework, which was recently expanded to include a certification for AI systems.
HITRUST is known as a strong information security and protection framework, creating a standard that organizations and stakeholders can both understand and trust on a range of issues. “The framework is specifically designed to speak to those third-party risk concerns,” Lee said. “If your downstream vendors have a HITRUST certification — that is, if they have an independent assessor saying that they’re compliant with that benchmark — that's much more meaningful than a bespoke audit that might or might not provide meaningful assurance to the consumers of that product or platform.”
HITRUST recently released the HITRUST AI Security Assessment to complement its framework and certification products. However, the determination of whether an AI solution meets the new standard — and where or how it needs controls — is not always clear. The assessment is much more than a simple checkbox evaluation.
Trust that’s bigger than a checkbox
“AI technology is not just the same system everywhere you go,” said Grant Thornton Strategic Assurance and SOC Services Principal Brad Barrett, who is Grant Thornton’s National HITRUST Practice Leader. To address the complexities of AI technology and today’s security threats, the HITRUST framework provides some specific guidance that requires skilled application.
“HITRUST is prescriptive, but it's not dogmatic in the way that other security standards are,” Lee said. As you apply or assess the HITRUST framework, it’s important to understand its intentions, broader implications and other dependencies across the organization. “If you simply see HITRUST as a checklist, it's not long before you're conflating a checklist item with meaningful internal controls. They are not at all the same thing,” Lee said.
To truly mitigate the risks and employ effective controls for the items in the HITRUST framework, it’s important to identify the nexus of how internal controls address specific risks within each organization. Lee said, “You need to show that connectivity, connecting the dots between discrete risks and actual control activities. It’s not strictly an exercise in math, but it also shouldn't resemble witchcraft. The artful science involved maps controls to risks in a manner that allows intelligent business decisions to determine whether the mitigation for each risk is acceptable.”
To understand the risks and the controls that map to them, assessors and their client counterparts need to have an understanding that spans the organization’s people, processes and technologies. They also need to have a dialogue about existing controls and how they actually work within the organization. Only then is it meaningful to map these inputs to the HITRUST framework.
Organizations that are new to HITRUST certification might be able to map many of the required controls to existing SOC 2 reports or other audits and assessments they already have. “You could be looking at relatively minimal effort to get that initial certification, whether it’s an e1 or i1 report,” Brennan said. “Then, you can add on the AI Assessment to provide valuable assurance to your vendors or third parties.” It’s important to ensure you have a depth of knowledge and experience that can guide you through a HITRUST certification, beyond a surface-level evaluation that leaves opportunities and risks undiscovered.
Strategic assurance
Recognizing that HITRUST certification is more than a simple evaluation, organizational leaders should prepare for a strategic path to assurance. There are many best practices to follow, as well as pitfalls to avoid.
Barrett recalled an organization that had prepared for HITRUST certification and initiated their first assessment. “They had service problems, quality problems and said that it felt harder than it needed to be. They needed someone with a better level of service, broader business knowledge and more ability to get answers from HITRUST.” Another organization achieved HITRUST certification in one area, then wanted to expand it to the rest of the enterprise. “They were able to completely adopt it across the organization and get full buy-in, because they finally found the right partner to make it work — prior partners just weren't able to get that done.”
As organizations look at broader certifications, they might even be able to combine assessments. “Organizations can often combine their various compliance efforts into one using the HITRUST framework. We are always looking to leverage existing efforts to satisfy multiple assessments,” Barrett said. “It can reduce the audit burden while still complying with HITRUST and staying on the leading edge of the latest versions. That’s where an assessor can be a partner, not just someone who comes in at the last minute to execute the bare minimum of the actual assessment.”
With a strategic path to HITRUST AI certification, healthcare organizations can address AI risks while aligning themselves to expand AI adoption and create sustainable compliance across the enterprise.
Content disclaimer
This Grant Thornton Advisors LLC content provides information and comments on current issues and developments. It is not a comprehensive analysis of the subject matter covered. It is not, and should not be construed as, accounting, legal, tax, or professional advice provided by Grant Thornton Advisors LLC. All relevant facts and circumstances, including the pertinent authoritative literature, need to be considered to arrive at conclusions that comply with matters addressed in this content.
Grant Thornton Advisors LLC and its subsidiary entities are not licensed CPA firms.
For additional information on topics covered in this content, contact a Grant Thornton Advisors LLC professional.