CTA intros new trustworthiness standard for healthcare AI

The Consumer Technology Association on Wednesday unveiled a new ANSI-accredited standard that it says represents the “baseline to determine trustworthy AI solutions in health care.”

WHY IT MATTERS
More than five dozen healthcare and technology organizations helped develop the consensus-driven standard, according to CTA, which says the aim is to identify “the core requirements and baseline for AI solutions in health care to be deemed as trustworthy,” as artificial intelligence and machine learning become “pervasive” across healthcare.

“Additionally, it explores the impact of the trustworthiness of AI in health care through the lens of the end user (e.g., physician, consumer, professional and family caregiver, public health, medical societies, and regulators) and will identify the unique challenges and opportunities for AI in the health care sector,” according to CTA.

Known as ANSI/CTA-2090, “The Use of Artificial Intelligence in Health Care: Trustworthiness,” the standard considers what the association says are the three key areas relating to how trust is created and maintained across stakeholders.

Human trust is concerned with developing “humanistic factors that affect the creation and maintenance of trust between the developer and users,” according to CTA. “Specifically, human trust is built upon human interaction, the ability to easily explain, user experience and levels of autonomy of the AI solution.”

Technical trust focuses on the design and training of AI and machine learning systems, ensuring they “deliver results as expected.” It also considers data quality and integrity, including algorithmic bias, security, privacy, data sources and access.

Regulatory trust, meanwhile, is “gained through compliance by industry based upon clear laws and regulations,” said CTA, whether that’s from accreditation boards, regulatory agencies, federal and state laws or international standardization frameworks.

“The nature of AI can make people suspicious of product performance, specifically in health care applications,” the association notes. “Many factors go into earning and sustaining trust in an AI health care product or application and these factors vary depending on the type of use and end user/stakeholder.”

THE LARGER TREND
CTA says membership in its Artificial Intelligence in Health Care working group has doubled over the past few years, now including representatives from more than 60 member organizations.

Participants include AdvaMed, America’s Health Insurance Plans, athenahealth, BlackBerry, Connected Health Initiative, Duke Margolis Center for Health Policy, Federation of State Medical Boards, Humetrix, Philips, ResMed, Roche, the Joint Commission, Validic and Viz.ai.

“Establishing these pillars of trust represents a step forward in the use of AI in healthcare,” said Pat Baird, regulatory head of global software standards at Philips and co-chair of the working group, in a statement.

“AI can help caregivers spend less time with computers, and more time with patients. In order to get there, we realized that different approaches are needed to gain the trust of different populations and AI-enabled solutions need to benefit our customers, patients and society as a whole. Collaboration across the healthcare ecosystem is essential to establish trust.”

The new ANSI standard comes a year after CTA put forth another healthcare AI-focused spec designed to provide common terminology so that consumers, vendors and care providers can better communicate about emerging technologies as they’re developed and put to work.

Billed as the first of its kind, the standard defines terms such as assistive intelligence, synthetic data and others related to how artificial intelligence is used in healthcare.

ON THE RECORD
“AI is providing solutions – from diagnosing diseases to advanced remote care options – for some of health care’s most pressing challenges,” said CTA CEO Gary Shapiro in a statement. “As the U.S. healthcare system faces clinician shortages, chronic conditions and a deadly pandemic, it’s critical patients and health care professionals trust how these tools are developed and their intended uses.”
