
The AI Health Scare in the Medical Sector

November 14, 2023
Insight

UCL Medical School Report Highlights Important Regulatory Concerns.

The integration of artificial intelligence (AI) in the medical industry comes with regulatory and standardisation challenges. A study conducted by University College London (UCL) Medical School identified several key areas that require further regulatory clarity:

1. Liability Concerns – There is uncertainty about who is responsible when an AI system fails, as multiple parties are involved in its creation and operation, including software developers, manufacturers, clinical staff, and patients.

2. Risk Classification Challenges – Existing medical device risk classifications do not account for AI-based devices whose output changes over time in response to new data. This creates uncertainty about device safety and may place the burden of assessing it on patients and clinicians.

3. Cybersecurity Vulnerabilities – Cyberattacks can fatally disrupt the basic functions of medical devices embedded within hospital networks and thereby compromise patient care. Many medical devices carry critical security vulnerabilities (Cynerio puts the figure at 50%) due to unclear guidelines on password protection, device management, and software update policies.

4. Interaction Between New Medical Devices and Legacy Components – Legacy components (outdated hardware or software still in use) are difficult to update, and many do not meet modern security requirements. Linking them to new devices increases the risk of cyberattacks, and responsibility for implementing security updates is often unclear.

5. Algorithmic Explainability and Transparency – Unlike traditional medical device software, AI continuously updates its decision-making as it receives new data, making it harder for the public to understand how decisions are made. Standards are needed to address the transparency, explainability, and accountability of AI systems in order to ensure patient confidence and safety.

6. Understanding and Assessing Bias – Because humans are the source of the data used to train AI, any bias embedded in that data can carry over into the AI's decision-making. In healthcare, such bias could, for example, lead to misdiagnosis of particular patient groups. Clear guidelines on appropriate training data for AI systems are therefore important.

7. Responsible Data Management – Standards and regulatory guidelines lack measures on data quality and integrity controls for AI-based medical devices; such controls are essential for accuracy, validity, and the removal of bias in their decision-making.

Regulatory frameworks for AI in medical devices are being redesigned in various jurisdictions. In the UK, the Medicines and Healthcare products Regulatory Agency (MHRA) is currently updating its medical device regulations and working on an initiative called the “Software and AI as a Medical Device Change Programme”.

As AI becomes more commonplace in healthcare and other industries, navigating the evolving regulatory landscape becomes increasingly complex. Seeking legal advice on these matters is advisable for those involved in the field. Read our article about AI trends in the medical sector here.
