The European Society of Radiology responded to the request for feedback on the definition of AI systems and prohibited practices launched by the AI Office as part of its efforts to implement the EU AI Act.
On the definition of an AI system under Article 3(1) and Recital 12 of the AI Act
The phrase “designed to operate with varying levels of autonomy” must be explained more thoroughly, given the huge diversity of systems, from simple input-output models to sophisticated agent-based or multi-agent systems, that fall under this generic definition. Many healthcare applications will fall within this wide range, from fracture detection tools to considerably more autonomous AI systems. We therefore call for further clarification of, and distinction between, levels of autonomy in the context of healthcare scenarios – this is particularly important for determining human oversight and accountability-related obligations, especially in high-risk applications. The term “may exhibit adaptiveness after deployment” also requires detailed explanation. Adaptiveness may range from no adaptation at all, to restricted adaptation within set boundaries, to a fully adaptive system able to evolve beyond its initial programming. In healthcare practice, it is relevant to distinguish between these degrees of adaptiveness, as overly adaptive systems risk departing from their original intended purpose. In line with the AI Act's risk-based approach, guidelines should categorize these levels of adaptiveness and indicate in which cases such changes will require re-certification or increased regulatory control.
On the prohibition of harmful manipulation and deception
The element “deploying subliminal, purposefully manipulative, or deceptive techniques” needs further clarification, especially in the healthcare context, where AI could inadvertently bias decision-making processes without explicit intent. In medical imaging, an AI system that produces results that seem accurate but are erroneous can lead doctors to change treatment pathways. Such changes may not be the direct result of overt manipulation; they nonetheless constitute a type of influence that does not fit neatly into conventional definitions of deception or manipulation, yet can materially affect decision-making in ways that ultimately harm patients. It is therefore essential that the guidelines explicitly address whether unintended but impactful AI errors, such as hallucinations, meet the criteria for deploying subliminal, purposefully manipulative, or deceptive techniques, especially in the context of high-risk healthcare applications. Similarly, the criterion of “significant harm” would benefit from additional guidance addressing the gradual accumulation of negative impacts of erroneous AI system outputs on patient care. For instance, delays in diagnosis or inappropriate treatments, while not immediately apparent, can significantly compromise patient health and safety over time. To conclude, the current wording needs further elaboration in order to provide stakeholders with clearer criteria for assessing whether an AI system crosses the threshold into prohibited practice.
One practical example involves AI systems used in radiology to triage imaging cases based on urgency. These systems, while beneficial for workflow efficiency, could unintentionally influence decision-making by prioritizing patients based on biased or incomplete data. This prioritization may inadvertently distort clinician behavior, even if there is no deliberate intent. Similarly, AI-powered imaging tools used to highlight specific features can sometimes “hallucinate” features that are not actually present. These false positives can lead radiologists towards incorrect diagnoses, distorting clinical judgment. Lastly, AI systems that modify language in radiology reports, while striving to ensure readability and consistency, could inadvertently change the nuance or interpretation of a medical report, leading to unintended consequences in downstream care. Detailed guidelines spelling out how “significant harm” to patient safety might occur over time would account for the compounding risk effect of AI systems used in healthcare. Depending on the use case, certain health risks associated with AI systems may only become apparent after extended periods, or may only emerge from the analysis of large datasets. Similarly, the boundaries of “manipulation and deception” should be clarified in order to establish whether an AI system used in healthcare would fall under the prohibited practices or under the high-risk classification – which is strictly regulated under Article 6 and Annex III of the Act.
On the prohibition of harmful exploitation of vulnerabilities
The current wording “exploiting vulnerabilities due to age, disability or specific socio-economic situation” would benefit from additional clarification to fully encompass the nuanced discriminatory effects AI systems can produce in healthcare. Specifically, AI systems in radiology may unintentionally exploit vulnerabilities of marginalized or underrepresented groups because of biases ingrained in their training datasets or the under-representation of certain groups within those datasets. The existing language does not sufficiently highlight these risks, especially when the impact does not stem from intentional manipulation or deception but from dataset limitations. Additionally, the current wording fails to address the gender dimension of vulnerabilities, which plays a crucial role as a health determinant in diagnostics. Accounting for gender is essential when evaluating AI systems used in healthcare, particularly in their roles as decision-making support tools and prioritization triage systems. AI systems may amplify pre-existing biases, thereby unintentionally “exploiting” vulnerabilities through inequitable clinical outcomes. In conclusion, more specific and inclusive guidelines should be developed to clarify how the regulation addresses indirect discrimination, the use of non-representative datasets and biased training models, and the exacerbation of health inequalities.
Radiological triage tools using AI are designed to read imaging studies and prioritize them by perceived urgency, but these systems may disproportionately impact vulnerable populations. AI systems, and triage tools in particular, can also inadvertently perpetuate discriminatory effects based on gender or race. Recent studies have indicated that women are sometimes deprioritized or misdiagnosed in medical settings, a trend that can be amplified when AI models learn from biased human-made systems. The same is true for individuals of lower socio-economic status, who may experience disparities in care if AI tools are built on historically biased datasets. In both cases, their scans might be assigned lower priority, which delays diagnosis and treatment and leads to potential harm. If an AI system learns from existing human-based processes that already harbor these biases, the discriminatory effects may be further amplified rather than corrected. This unintended consequence risks undermining the principles of equity in healthcare. Further guidance should help in assessing when AI systems fall under prohibited use due to significant harm risks, and when they should instead be designated as high-risk with specific regulatory oversight requirements. This way, misuse and discriminatory effects on vulnerable groups can be prevented, and trust in AI-assisted healthcare maintained – among both patients and medical professionals.
On the prohibition of social scoring
The prohibition of social scoring may need to be clarified for cases in which AI-generated radiology results are shared with insurance companies, because such evaluation or classification of individuals based on inferred or predicted characteristics could lead to detrimental treatment, such as higher premiums or denial of coverage. It should be clarified that AI systems used in healthcare must not result in unjustified or disproportionate social scoring by third parties, for instance insurers. The guidelines should highlight safeguards that prevent any out-of-context misuse of AI-generated medical data, by ensuring it does not feed into prohibited social scoring.
On the prohibition of biometric categorisation to infer certain sensitive characteristics
AI systems could, in theory, be used to stratify medical risks by analyzing electronic patient records and integrating biometric data. For example, AI could infer risk factors, such as vulnerability to specific diseases, based on patterns derived from biometric markers. However, clarification is needed to ensure such systems do not inadvertently categorize individuals based on sensitive attributes, even if those attributes emerge as by-products of medical analyses.
On the interplay with other Union legislation
AI systems used for triaging medical images risk causing unfair prioritization and unequal treatment, which are prohibited under both the AI Act and the GDPR. Specifically, systems trained on biased datasets, combined with insufficient oversight and overreliance due to heavy workloads, can unduly influence medical decisions, impairing informed choices (Article 5(1)(a), AI Act) and exacerbating healthcare inequities for vulnerable groups (Article 5(1)(b), AI Act). This would also run contrary to the GDPR requirements of fair and transparent processing, error minimization, and prevention of discriminatory impacts (Recitals 71 and 75, GDPR). Such AI-integrated medical devices pose risks to patient safety and health, requiring careful monitoring and the establishment of preventative measures (Article 98, MDR). Monitoring and diagnostic AI systems that adapt through continuous learning can also become unsafe if they evolve beyond their original purpose, leading to erroneous data inferences that misguide medical professionals (Article 5(1)(a), GDPR) and compromise patient care. Aligning the definitions of “significant changes” under the AI Act and the MDR (Articles 10(9), 23(2), MDR) is essential to regulate adaptive AI systems and prevent ambiguity about re-certification. Clear guidelines on the consistent implementation of transparency, data governance, and oversight across the AI Act, MDR, and GDPR are vital to build trust and clarity in the use of AI within healthcare.