Abstract
Amidst rapid advancements in artificial intelligence and machine learning-enabled medical devices (AI/ML-MD), this article investigates the regulatory challenges highlighted in the current academic literature. Using a PRISMA-guided scoping review, 18 studies were selected for in-depth analysis to highlight the multifaceted issues in regulating AI/ML-MD. The study's findings are organized into key themes: adaptive AI/ML, usability and stakeholder engagement, data diversity and use, health disparities, synthetic data use, regulatory considerations, medicolegal issues, and cybersecurity threats. The scoping review reveals numerous challenges associated with the regulation of AI/ML-based medical devices, reflecting various sustainability pillars. The study advocates for integrating sustainability principles into the materiovigilance ecosystem of AI/ML-MD and proposes a novel sustainable ecosystem for AI/ML-MD materiovigilance. This proposed ecosystem incorporates social, economic, and environmental sustainability principles to create a comprehensive and balanced regulatory approach. By presenting a thorough analysis of regulatory challenges, the study provides policymakers with a nuanced understanding of the complex landscape surrounding these technologies. This insight enables the development of informed strategies and solutions to address regulatory gaps and ensure the safe and effective integration of AI/ML-MD into healthcare systems.
1 Introduction
Industry 5.0 (I5.0), also known as the fifth industrial revolution, has been gaining momentum in industry and academia since its inception. I5.0 complements and moves in a continuum with Industry 4.0 (I4.0), acknowledging that I4.0 placed less focus on sustainable development and social fairness. I4.0 focuses on adopting technology to achieve interoperability, decentralization, modularity, real-time capability, virtualization, and service orientation [1]. I5.0 embraces sustainable development through a value-driven rather than a process-driven approach. I5.0 encompasses three pillars: human centricity, sustainability, and resilience. Human centricity focuses on promoting talents, empowerment, and diversity. Sustainability aims to develop circular processes (circular economy) by preserving the planet, and resilience focuses on building adaptable, agile, and flexible systems that foresee natural emergencies and geopolitical shifts [2]. Healthcare is likewise evolving into Healthcare 5.0 (H5.0), embracing the novel technologies of I5.0. H5.0 holds vast potential and opportunities, encompassing personalized medicine, advanced diagnostics, telemedicine, and a shift toward more patient-centric care. These advancements are facilitated by the integration of cutting-edge technologies such as artificial intelligence (AI), big data analytics, blockchain, and robotics [3]. While Healthcare 4.0 focuses on the use of innovative digital technologies for real-time customization of the healthcare provided to patients [4], H5.0 aims to deliver more personalized and sustainable services [5].
A few scenarios of H5.0 include: machine learning models that assist in diagnosis based on real-time patient data and send alerts when a doctor's intervention is required; high-precision surgeries performed by collaborative robots (cobots) [6] following doctors' inputs; assistive technologies for people with disabilities, such as personalized wearables, exoskeletons, and prosthetics; and extended-reality-enabled health education and remote healthcare [3]. Nevertheless, obstacles such as healthcare operational procedures, the verifiability of prediction models, resilience, and the absence of ethical and regulatory frameworks pose potential challenges to the attainment of Healthcare 5.0 [7].
Transitions from conventional healthcare to smart healthcare have brought rapid transformations in the use of medical devices (MD) [8]. The term medical device covers a wide range of devices designed for medical purposes, including instruments, apparatus, machines, appliances, implements, implants, reagents for in vitro use, software, materials, or other similar or related articles. These devices may be employed alone or in combination and do not primarily achieve their intended action through pharmacological, metabolic, or immunological means within or on the human body [9]. There has been a notable surge of interest in artificial intelligence and machine learning-based medical devices (AI/ML-MD), with a substantial increase in the number of approved devices [10]. The rise of commercially available AI/ML-MD designed to diagnose, mitigate, treat, cure, or prevent diseases has gained momentum, which can be attributed to the substantial digitization of healthcare data, the extensive production of hardware such as sensors and wearables (Internet of Things, IoT), and the maturation of deep learning [11]. For instance, AI-embedded wearables enable the seamless gathering, storage, transmission, and processing of data. By leveraging past observations and incorporating associated data such as diet, activity, and medications, the software can generate forecasts and predictions through AI algorithms [12].
Despite its promising potential, the integration of AI into clinical practice faces numerous challenges, including concerns about transparency in software programs, inherent biases in data inputs, and data security. The regulation of these technologies emerges as a crucial element in shaping and addressing these obstacles effectively [13]. The necessity of developing regulatory guidance for AI/ML-MD is widely acknowledged, prompting numerous regulatory initiatives at national, regional, and global levels. Regulatory authorities such as the U.S. FDA, alongside international standards organizations, notably ISO, are actively engaged in shaping frameworks to address this imperative. The U.S. FDA uses a risk-based classification of medical devices. Premarket approval (for high-risk devices; the most stringent pathway), 510(k) premarket notification, and the De Novo request (for low- and moderate-risk devices) are common regulatory pathways for AI/ML-MD approval in the U.S. [14]. Likewise, the EU uses the Conformité Européenne (CE) mark to denote AI/ML-MD approval [10]. By contrast, countries such as India are still in the early stages of developing and implementing AI/ML-MD regulations.
In light of the growing relevance and evolving nature of AI/ML-MD, there arises a critical imperative to tailor regulatory mechanisms and monitoring pathways in a manner that is both resilient and adaptable. The FDA's 2019 framework proposes a premarket review approach for AI and machine learning software modifications, requiring a total product lifecycle (TPLC) regulatory method. This TPLC approach, based on the Digital Health Software Precertification (Pre-Cert) Program, evaluates Software as a Medical Device (SaMD) products throughout their lifecycle. It involves assessing a company's quality and organizational excellence to ensure high standards in software development, testing, and performance monitoring. This ensures the safety and effectiveness of both the organization and its products, giving patients, caregivers, and healthcare professionals confidence in their use. The TPLC approach supports ongoing evaluation and monitoring from premarket development to postmarket performance [15]. In 2021, the FDA, Health Canada, and the UK's MHRA identified ten guiding principles for good machine learning practice (Fig. 1). These principles support the development of safe, effective, and high-quality AI/ML technologies that learn from real-world use and can improve device performance. They also highlight areas where international bodies like the IMDRF and standards organizations can advance good machine learning practice [16,17].
Within this context, this article endeavors to understand and comprehend the intrinsic challenges associated with the regulatory control of AI/ML-MD as presented in the academic literature. Moreover, it posits that the examination of AI/ML-MD should be contextualized within the paradigm of sustainability principles advocated by H5.0. This perspective underscores the implications for environmental, societal, and economic healthcare sustainability. Drawing upon the key challenges, this paper aims to curate a sustainable ecosystem, incorporating various characteristics essential for regulatory considerations.
The paper is structured as follows: Sec. 2 explains the methodology, followed by the key insights and the conceptualization of a sustainable ecosystem for AI/ML-MD materiovigilance. The article concludes with a section on contributions, limitations, and future implications.
2 Method
To address the research inquiries, a scoping review methodology was selected for its ability to offer a comprehensive view of the existing literature without necessitating an exhaustive analysis. This systematic approach facilitates the synthesis of knowledge from the available literature, pinpointing particular themes [18]. Scoping reviews endeavor to offer an initial glimpse into an emerging subject while also highlighting gaps in the current research literature [19]. A PRISMA-guided scoping review following the guidelines of Tricco et al. [20] was conducted in January 2024. SCOPUS offers a more extensive selection of subject areas and categories than other databases, allowing scholars to find the journals most pertinent to their research field more effectively. SCOPUS is capable of curating a vast collection of papers in emerging fields from reputable journals [21]. A structured search with Boolean operators was performed using the advanced search option in the SCOPUS database. The keywords were finalized after a preliminary review of the literature. The search used the keywords: (“artificial intelligence” or “machine learning” or “deep learning”) and (“medical devices” or “wearables”) and (regulat*). The search was confined to the title, abstract, and keywords. The study aims not to conduct a narrative review of existing regulations but rather to comprehend the challenges and explore potential solutions within the already established regulatory frameworks. Thus, studies were deemed eligible for inclusion if they reviewed regulatory challenges and provided policy implications, offering a comprehensive synthesis of existing knowledge and insights across a broader scope. The exclusion criteria removed studies that were not review papers, to maintain sample homogeneity [22]. Additionally, studies published in languages other than English were omitted to maintain consistency.
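For reproducibility, the structured search described above can be expressed programmatically. The sketch below assembles the Boolean query in SCOPUS advanced-search style; the `TITLE-ABS-KEY` field code is an assumption based on standard SCOPUS syntax and may differ from the exact string used in the original search.

```python
# Sketch of the SCOPUS advanced-search string for the review.
# TITLE-ABS-KEY restricts matching to title, abstract, and keywords;
# this field code is assumed from standard Scopus syntax.
concept_ai = ['"artificial intelligence"', '"machine learning"', '"deep learning"']
concept_device = ['"medical devices"', '"wearables"']
concept_regulation = ['regulat*']  # wildcard covers regulate/regulation/regulatory


def or_group(terms):
    """Join alternative keywords into a parenthesized OR group."""
    return "(" + " OR ".join(terms) + ")"


query = "TITLE-ABS-KEY(" + " AND ".join(
    or_group(g) for g in (concept_ai, concept_device, concept_regulation)
) + ")"

print(query)
```

The three OR groups joined by AND mirror the three concepts of the search (AI/ML methods, device types, and regulation).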
To maintain quality, articles published in SCOPUS quartile two, three, and four, as well as those not indexed in SCOPUS, were excluded. Finally, studies focusing primarily on regional regulations were excluded to uphold the specificity of the review's objectives. These criteria were meticulously designed to refine the scope and ensure a targeted exploration of pertinent themes [22]. The search results were transferred to Microsoft Office 2021 Excel software. Initially, duplicate records were removed. Subsequently, articles underwent parallel screening based on title and abstract, employing eligibility criteria to ascertain alignment with our objectives. Two reviewers, S.K. and D.M., individually examined the titles and abstracts of all papers according to the predetermined criteria for eligibility. After reviewing the titles and abstracts, the reviewers compared the results and resolved any disputes through discourse. Finally, the full text of the chosen articles was thoroughly examined and assessed against the inclusion criteria. The PRISMA flow diagram is presented in Fig. 2.
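The deduplication and eligibility-screening steps described above can be sketched as follows. The record fields and sample entries are hypothetical illustrations; the actual screening was performed by two reviewers in Excel.

```python
# Minimal sketch of the deduplication and eligibility-screening steps.
# Field names and the sample records are hypothetical illustrations.
records = [
    {"doi": "10.1/a", "is_review": True,  "language": "English", "quartile": 1},
    {"doi": "10.1/a", "is_review": True,  "language": "English", "quartile": 1},  # duplicate
    {"doi": "10.1/b", "is_review": False, "language": "English", "quartile": 1},  # not a review
    {"doi": "10.1/c", "is_review": True,  "language": "German",  "quartile": 1},  # non-English
    {"doi": "10.1/d", "is_review": True,  "language": "English", "quartile": 3},  # below Q1
]


def eligible(r):
    """Inclusion criteria: English-language review articles in SCOPUS quartile one."""
    return r["is_review"] and r["language"] == "English" and r["quartile"] == 1


# Step 1: remove exact duplicates by DOI, keeping the first occurrence.
seen, unique = set(), []
for r in records:
    if r["doi"] not in seen:
        seen.add(r["doi"])
        unique.append(r)

# Step 2: apply the eligibility criteria to the deduplicated set.
included = [r for r in unique if eligible(r)]
print(len(included))  # → 1 (only the first record survives all criteria)
```

In practice, a second parallel screen on title and abstract would follow, with disagreements between reviewers resolved by discussion, as described above.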
3 Results
The initial search resulted in 411 records. After removing the non-English and nonreview articles, 110 records were screened for source quality, and 75 studies were removed from the analysis because the source quality did not fall within the top ten percentile in SCOPUS. The resulting 35 records were screened based on title and abstract, and only 27 records were sent for full-paper retrieval. We were able to retrieve the full versions of all the records. Nine papers were omitted after full-text review as they did not align with the study objectives, and 18 records were considered for the final stage. The summary of the selected articles is presented in Table S1 available in the Supplemental Materials on the ASME Digital Collection. Sec. 3.1 presents the bibliometric profile, and Sec. 3.2 narrates the insights obtained from the selected articles across various themes.
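The record flow reported above can be checked arithmetically; the short sketch below uses only the counts stated in the text.

```python
# Sanity check of the PRISMA record flow reported in the text.
initial = 411
quality_screened = 110        # after removing non-English and nonreview records
removed_quality = 75          # sources outside the SCOPUS top-ten percentile
title_abstract_screened = quality_screened - removed_quality
retrieved = 27                # sent for (and obtained in) full-text retrieval
excluded_full_text = 9
final = retrieved - excluded_full_text

assert title_abstract_screened == 35   # matches the reported 35 records
assert final == 18                     # matches the 18 studies analyzed
print(title_abstract_screened, final)  # → 35 18
```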
3.1 Bibliometric Profile.
During the analysis of the publication timeline, it was observed that the highest number of articles were published in 2023. Figure 3 illustrates the trend in publications over the years. Notably, journals such as the Journal of Medical Internet Research, Drug Discovery Today, and the Journal of the American Medical Informatics Association have emerged as the primary contributors, publishing the most significant number of articles on the subject. The outputs of keyword co-occurrence (to understand the relationship among the topics) are presented in Fig. 4. Table 1 presents the distribution of reviewed articles by journal.
| Journal | Number of articles |
|---|---|
| Journal of Medical Internet Research | 3 |
| Drug Discovery Today | 2 |
| Journal of the American Medical Informatics Association | 2 |
| Artificial Intelligence in Medicine | 1 |
| Circulation: Arrhythmia and Electrophysiology | 1 |
| Clinical and Translational Science | 1 |
| Clinical Pharmacology and Therapeutics | 1 |
| European Radiology | 1 |
| Fertility and Sterility | 1 |
| Frontiers in Medicine | 1 |
| JAMA Oncology | 1 |
| Journal of Medical Systems | 1 |
| Radiology | 1 |
| The Lancet Digital Health | 1 |
3.2 Research Themes.
Table 2 summarizes the research themes that emerge from the reviews considered in this study. We identify seven main research themes, as detailed below.
| Theme | References |
|---|---|
| Adaptive AI/ML | Ota et al. [25]; Gilbert et al. [26]; Patil et al. [27] |
| Usability and stakeholder engagement | Farah et al. [28]; Welzel et al. [29]; Alami et al. [30] |
| Data diversity and data use | Curchoe [11]; Panagiotou et al. [31]; Tarakji et al. [32] |
| Health disparities | Ferryman [33] |
| Use of synthetic data | Alloza et al. [34]; Moingeon et al. [35] |
| Medicolegal issues and cybersecurity threats | Kelly et al. [36]; Näher et al. [37]; Oliva et al. [38] |
| Regulatory considerations | Zhu et al. [14]; Miller et al. [39]; Hernandez-Boussard et al. [40] |
3.2.1 Adaptive Artificial Intelligence and Machine Learning.
It is commonly assumed that MDs will be powered by Internet-connected AI algorithms, allowing them to be managed by marketing authorization holders. However, with advancements in computer specifications, future MDs may have AI algorithms embedded within themselves. In such cases, MD behavior will vary across clinical situations, necessitating a quality management system. Marketing authorization holders would provide protocols for data collection, MD quality management, and behavior confirmation. In hospital settings, automatic modifications by MDs should be monitored by a clinical engineer, with responsibility shared among system integrators, designers, and MD designers for the safe and effective operation of AI-algorithm-driven MDs. Under the new review system in Japan, the regulatory authorities will require clinical data and a postmarketing modification plan for parent MDs during the application review. After approval, modifications within the pre-approved plan may be confirmed by the regulatory body without a full clinical data review, providing patients with more timely access to modified devices while ensuring safety and efficacy [25]. In the absence of standardized procedures for premarket review of adaptive ML-based SaMD for algorithm change protocols in the EU, Gilbert et al. [26] propose developing a comprehensive action plan involving public consultation, mirroring the FDA's 2020 approach.
Software-enabled MDs are difficult to regulate because they demand high precision and validated software. In the absence of precise legislation, the current regulatory requirements include a demonstration of safety and performance; confirmation and validation of specifications for the proposed use; design of a program for reliability, repeatability, performance, and technical equivalence; and ensuring the competence of associated personnel. The conventional regulatory pathways have failed to oversee “adaptive” AI/ML-based SaMD, and hence a holistic approach focusing on the total lifecycle of the MD is proposed [27].
3.2.2 Usability and Stakeholder Engagement.
Apart from assessing the safety and effectiveness of AI/ML-MD, Farah et al. [28] propose the need to consider the following factors: interpretability, cybersecurity, explainability, safety of ongoing updates, ethics, human–AI interaction, organizational impacts, economic assessment, and quality of data management. The usability of tools holds equal significance to the quality of algorithms and other parameters. With healthcare evolving toward increased automation via digital systems, the effectiveness of these systems for both patients and providers should be effortlessly quantifiable through automated digital systems [29]. Addressing AI challenges across technological, clinical, human and cognitive, professional and organizational, economic, legal, and ethical dimensions is crucial. In this new context, continuous and collective learning is imperative. Creating the necessary political, regulatory, organizational, clinical, and technological conditions for innovation is the initial step, emphasizing the importance of building trust for stakeholder engagement. This engagement guides AI developments, facilitates rapid knowledge generation in real-world care settings, and enables the translation of lessons into actionable strategies [30].
3.2.3 Data Diversity and Data Use.
Data diversity is an important component of the FDA's premarket requirements for AI/ML-MD. In the context of reproduction-related MD, Curchoe [11] highlights a crucial need to question what qualifies as adequate “diversity” in data. Determining biologically relevant types of diversity, including factors like race, ethnicity, socioeconomic status, age, and diagnosis, requires careful consideration. The challenge lies in establishing how to ascertain or demonstrate that datasets encompass a sufficient breadth of diversity tailored to the specific inquiries being explored. This critical assessment ensures that the data used for studies are comprehensive and representative of the diverse population relevant to the field [11]. Ensuring the responsible use of computational methods involves addressing key considerations: data quality, data diversity, computational reproducibility, face validity, a risk-based SaMD regulatory approval pathway, prospective clinical utility trials, training of clinical oncologists, multidisciplinary boards, and requiring computational methods to recertify clinical staff [31].
Most FDA-cleared wearable devices are primarily designed for the general population rather than tailored for disease management in individuals with specific conditions. This approach poses limitations as the software, hardware, and workflow may not be well-suited for effective disease management. The utilization of these technologies, whether in the form of hardware or software as a medical device, for disease management is likely to necessitate additional FDA clearance. This is driven by various factors, including the need to validate the accuracy and performance of diagnostics in specific populations with varying disease prevalence. Beyond positive predictive value, assessing other diagnostic performance measures such as sensitivity, specificity, and negative predictive value becomes crucial. This information is especially vital when the data from these devices have actionable implications or lead to changes in therapy, potentially elevating the risk classification of the device. In cases where technologies are directly linked to treatment, such as the use of anti-arrhythmic or anticoagulation drugs (in the context of atrial fibrillation), there may be a requirement for combined drug-device approval [32].
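The reason performance must be revalidated in populations with different disease prevalence is that predictive values, unlike sensitivity and specificity, depend on prevalence. The short calculation below illustrates this with made-up sensitivity and specificity figures; the numbers are not taken from any cited device.

```python
# Illustrative: how PPV and NPV shift with disease prevalence for a
# fixed-performance diagnostic (sensitivity/specificity values are made up).
def predictive_values(sensitivity, specificity, prevalence):
    """Apply Bayes' rule to convert intrinsic test performance into PPV/NPV."""
    tp = sensitivity * prevalence              # true positives (per unit population)
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    tn = specificity * (1 - prevalence)        # true negatives
    fn = (1 - sensitivity) * prevalence        # false negatives
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return ppv, npv


sens, spec = 0.95, 0.95
for prev in (0.01, 0.10, 0.30):  # general population vs. higher-risk cohorts
    ppv, npv = predictive_values(sens, spec, prev)
    print(f"prevalence={prev:.2f}  PPV={ppv:.2f}  NPV={npv:.2f}")
```

Even with 95% sensitivity and specificity, the PPV falls below 0.2 at 1% prevalence but exceeds 0.85 at 30% prevalence, which is why a wearable cleared on a general population cannot be assumed to perform acceptably for disease management in a specific cohort.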
3.2.4 Health Disparities.
The incorporation of AI/ML tools in healthcare brings forth safety concerns, particularly regarding health disparities. Neglecting potential adverse impacts can heighten the risk and danger for groups already marginalized and discriminated against in healthcare. Health disparities should not be overlooked or treated as an optional dimension when developing ML tools for medicine. To address this, considerations for health disparities can be integrated into the FDA's AI/ML regulation in four ways, encompassing both premarket (knowledge of health disparities, data bias review, group impacts, and performance decisions) and postmarket stages (health equity review) [33].
3.2.5 Use of Synthetic Data.
Synthetic data (artificially generated data) is widely used as a proxy for real-world data, but there is a lack of methodological and regulatory guidelines concerning the use of synthetic data for regulatory approvals [34]. In silico trials, simulation faces challenges as regulatory agencies' full acceptance of virtual patients, historical controls, and digital endpoints awaits the establishment of a regulatory framework. Fully integrative models of individual patients, built on extensive and multimodal data, may soon become feasible with the anticipated emergence of high-power computing, quantum computing, and exascale computing. It is essential to strengthen the credibility of digital evidence by verifying in silico model outputs with real clinical study results, both retrospectively and prospectively [35].
3.2.6 Medicolegal Issues and Cybersecurity Threats.
The significance of cybersecurity challenges persists in the realm of AI-enabled medical devices, primarily revolving around ethical considerations and privacy concerns associated with data in terms of ethics, access, querying, de-identification, storage, transfer, labeling, and training. Addressing these challenges can be achieved through proactive approaches such as regular auditing and logging, security controls, dealing with legacy software, federated learning, and identifying security risks in the developmental phase [36]. Utilizing secondary health data in accordance with the findable, accessible, interoperable, and reusable (FAIR) principles is highly challenging due to the lack of international regulatory standards or guidance. The absence of standardized regulations poses obstacles to achieving equitable access and interoperability of secondary health data [37].
3.2.7 Regulatory Considerations.
Zhu et al. [14] highlight the importance of human interaction. AI/ML devices operate as “black boxes,” lacking transparency, and face inconsistencies between their intended purpose and FDA evaluation metrics. Implementation plans for regulations specific to AI/ML-based SaMD remain unclear for manufacturers. Although thoughtful designs for these regulations have been proposed, the execution details are not well-drafted. Zhu et al. recommend conducting multicenter and prospective randomized controlled trials to minimize bias and suggest informing the consequences of false-positive predictions. They further propose disclosing key information, including the network architecture, model parameters, and training data, to enhance transparency. Additionally, the FDA could establish an individualized framework for each SaMD, facilitating the monitoring of modifications.
The FDA's 510(k) third-party review program enables the clearance of low- to moderate-risk MDs for marketing [39]. Miller et al. propose improving training, certification processes, and human capital retention, ensuring that user fees and review timelines favor intra-FDA pathways. The goal is to automatically direct certain devices through the third-party process, allowing the FDA to concentrate regulatory efforts on higher-risk devices and newer technologies while maintaining high-quality reviews for patient protection. Following the FDA's move to exempt class I and certain class II MDs, which include AI/ML-MDs, from premarket notification requirements, Hernandez-Boussard et al. [40] urge the FDA to reinforce safety regulations for AI-based medical software to safeguard fair and safe clinical decision tools.
4 Discussion
While AI systems are well-received, technically advanced, and highly useful, the inherent risk of biases poses a significant concern. Inaccurate predictions resulting from the algorithm and data biases in AI models for medical devices can be life-threatening, disrupting accurate diagnoses and treatments. The literature survey highlights multiple concerns related to the monitoring and regulation of AI-enabled medical devices, encompassing issues such as data diversity, data use, cybersecurity, stakeholder engagement, and human–AI interaction. Given the evolving nature of AI/ML-MD, a lifecycle approach is consistently favored for addressing these concerns. In adherence to stringent regulations, it is imperative to ensure that the benefits of AI/ML-MD are promptly accessible to those in need while maintaining the quality of predictions. Recognizing this need, we propose the establishment of a sustainable materiovigilance ecosystem for AI/ML-MD.
Materiovigilance is the “coordinated system of identification, collection, reporting, and analysis of any untoward occurrences associated with the use of medical devices and protection of patient's health by preventing its recurrences” [41]. Continuous monitoring of these devices allows for the identification of perilous ones, enabling their removal from the market. Beyond withdrawal, companies can actively address and eliminate faults, contributing to the overall improvement in the quality of medical devices. This proactive approach ensures the safety of patients and consumers by mitigating potential risks associated with medical equipment [42]. By adopting a “materiovigilance” approach tailored to AI/ML-MD and incorporating sustainability principles advocated by H5.0, healthcare systems can proactively identify and address potential issues, ensure the responsible deployment of AI/ML technologies, and contribute to the overall advancement of healthcare delivery in a sustainable manner.
The scoping review reveals a myriad of challenges linked to the regulation of AI/ML-MD, echoing various pillars of sustainability. As healthcare systems globally aim for sustainable transformations, we advocate for the integration of sustainability principles into materiovigilance, ensuring the establishment of a sustainable ecosystem. By incorporating a comprehensive perspective that considers societal, economic, and environmental impacts, we can foster a sustainable ecosystem for AI/ML-MD that prioritizes both innovation and the well-being of individuals and the broader community.
At its core, the proposed framework is a sustainable materiovigilance ecosystem interconnected by the three pillars of sustainability. Environmental sustainability is conceptualized in terms of biodiversity conservation, social needs, the regenerative capacity of natural resources, limiting the usage of nonrenewable energy sources, and waste generation, reuse, and recycling. Materiovigilance of AI/ML-MD shall consider minimizing the ecological footprint associated with device production, usage, and disposal, aligning with environmentally friendly practices. As advocated by Kaladharan et al. [43] in the context of digital health systems, AI/ML-MD can also challenge environmental sustainability, as it consumes enormous amounts of nonrenewable energy during the production of computing hardware and the cooling of servers, and generates enormous amounts of e-waste. Optimizing data centers, utilizing renewable energy sources, and promoting energy-efficient device design are crucial steps in ensuring environmental sustainability. Conducting holistic life cycle assessments of the MDs, from material sourcing to disposal, helps identify environmental hotspots and prioritize improvement areas. Regulations should aim to protect patient safety while minimizing unnecessary burdens on businesses, ensuring economic sustainability. Cost-benefit analyses can help ensure regulations achieve their goals without hindering innovation or creating excessive economic hardship. Exploring risk-based approaches can prioritize resources toward high-risk devices or materials, focusing on areas with the most significant potential impact on patient safety. Transparency in data sharing and collaboration between regulators, researchers, and industry can inform efficient and targeted regulations based on accurate evidence. Further, harmonizing regulations across the globe can avoid unnecessary economic burdens in bringing AI/ML-MD onto the market.
Social sustainability, focusing on human welfare, becomes particularly critical in the context of AI/ML-MD, where the experimental nature of these technologies requires acknowledgment. It is imperative to establish ethical safeguards to address concerns such as the transparency of black-box algorithms, issues of explainability, biases in AI models stemming from unequal representation in training data, and potential infringements on citizens' rights, including privacy concerns with facial and emotional recognition systems. Limiting reliance on conventional black-box AI models and adopting explainable AI (XAI) models could be a feasible approach for materiovigilance. XAI refers to algorithms that produce outputs along with the reasons and details behind them, so that they are comprehensible and understandable to humans. The goals of XAI include improving trustworthiness, transferability, causality, fairness, informativeness, confidence, accessibility, interactivity, and privacy awareness among domain experts, product owners, users affected by model decisions, and regulatory bodies. Transparent models facilitate troubleshooting and prevent recurrences of incorrect predictions. Further, they enable tracking data use and detecting privacy breaches, if any [7,44,45]. From a materiovigilance perspective, prioritizing data privacy, security, and stakeholder engagement becomes crucial to ensure the social sustainability of AI technologies [46]. Figure 5 represents the conceptualized sustainable ecosystem for the materiovigilance of AI/ML-MD.
5 Contributions, Limitations, and Future Implications
The literature review has contributed by consolidating existing knowledge on the regulatory landscape for AI/ML-MD and associated challenges. The proposed sustainable materiovigilance ecosystem integrates social, economic, and environmental considerations into the regulation of AI/ML-MD. By emphasizing data privacy, stakeholder engagement, cost optimization, and eco-friendly practices, the framework aims to ensure the long-term viability and ethical use of these technologies. The suggested approach aligns with the broader goals of healthcare sustainability. While the present study offers a broad overview, a more nuanced understanding of regulatory landscapes can be achieved by examining how cultural, legal, and healthcare system variations impact the adoption and implementation of regulations. By implementing such a framework, policymakers can ensure that AI/ML-MD regulation is not only technically robust but also socially responsible and environmentally sustainable. The comprehensive presentation of regulatory challenges associated with AI/ML-MD provides policymakers with a nuanced understanding of the complex landscape surrounding these technologies. By examining the various obstacles and constraints, policymakers can develop informed strategies and potential solutions to address regulatory gaps and ensure the safe and effective integration of AI/ML-MD into healthcare systems. By adopting a sustainability perspective, regulatory scientists can contribute to the development of holistic and resilient regulatory frameworks that prioritize the long-term well-being of both patients and the planet. Despite the significant contributions of the study, several limitations must be acknowledged. The study is subject to the availability and quality of existing literature. Additionally, the rapidly evolving nature of AI/ML technologies may result in a time lag in reflecting the latest regulatory developments. 
The focus on regulations might overlook specific regional nuances or emerging trends that have not yet gained widespread attention.
6 Conclusion
The rapid advancement of AI/ML-MD presents significant regulatory challenges that must be addressed to ensure their safe and effective integration into healthcare. This study, through a PRISMA-guided scoping review, has meticulously analyzed 18 high-quality studies to identify key regulatory issues. The findings, organized into themes such as adaptive AI/ML, usability and stakeholder engagement, data diversity and use, health disparities, synthetic data usage, regulatory considerations, medicolegal issues, and cybersecurity threats, highlight the multifaceted nature of these challenges. Addressing these issues is crucial for the continued innovation and implementation of AI/ML-MD. Future efforts should focus on developing comprehensive regulatory frameworks that can adapt to technological advancements while ensuring patient safety and promoting equitable healthcare outcomes.
Acknowledgment
S.K., the first author, gratefully acknowledges the Ph.D. fellowship awarded by the University Grants Commission (UGC) of India. The UGC had no role in the design and execution of the study or the decision to submit this manuscript for publication.
Conflict of Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Data Availability Statement
No data, models, or code were generated or used for this paper.