Artificial intelligence (AI) is rapidly transforming how research is conducted across fields. From healthcare to education, AI tools are being used to analyze data, make predictions, and even generate content. This surge in AI-driven research brings with it a host of ethical considerations that traditional oversight mechanisms, like Institutional Review Boards (IRBs), are struggling to address.
Both institutional and independent IRBs have long served as the gatekeepers of ethical research, primarily focusing on studies involving direct human interaction. Their protocols are well-suited for clinical trials or behavioral studies where participants are clearly defined and risks can be assessed through established frameworks. However, AI research often operates differently. It may involve analyzing large datasets, using machine learning algorithms, or deploying AI systems in real-world settings without direct human subjects in the traditional sense.
This shift presents challenges for IRBs. For instance, AI systems can inadvertently perpetuate biases present in their training data, leading to unfair or discriminatory outcomes. A notable example is the use of facial recognition technology, which has been shown to have higher error rates for individuals with darker skin tones, raising concerns about racial bias in AI applications (Buolamwini & Gebru, 2018).
Moreover, the use of AI in research often involves handling vast amounts of personal data, sometimes without explicit consent from individuals. This raises questions about privacy and data protection that existing IRB frameworks may not fully encompass. As AI systems become more complex and autonomous, determining accountability for their actions becomes increasingly difficult.
Recognizing these challenges, there is a growing consensus that IRBs need to evolve to effectively oversee AI research. This includes incorporating expertise in AI and data science into review processes, developing new ethical guidelines tailored to AI technologies, and fostering interdisciplinary collaboration to understand the broader societal impacts of AI.
In the following sections, we will explore the specific ethical challenges posed by AI research, examine the limitations of current IRB models, and discuss strategies for adapting ethical oversight to keep pace with technological advancements without compromising core ethical principles.
The Unique Ethical Challenges Posed by AI Research

AI is transforming research across disciplines, offering powerful tools for data analysis, prediction, and automation. However, this rapid advancement introduces ethical challenges that traditional oversight mechanisms, like IRBs, are not fully equipped to handle.
- The “Black Box” Problem: Many AI systems, especially those utilizing deep learning, operate as “black boxes,” producing outputs without transparent reasoning. This lack of explainability poses ethical concerns, particularly in fields like healthcare, where understanding the rationale behind a diagnosis or treatment recommendation is crucial. For instance, a deep learning model developed at Mount Sinai Hospital could predict psychiatric disorders without revealing how it reached its conclusions, leaving clinicians unable to interpret or trust its recommendations (Wadden, 2023).
- Algorithmic Bias and Fairness: AI systems learn from data, and if that data contains biases, the AI can perpetuate or even amplify them. This is particularly concerning in applications like hiring, lending, or law enforcement. For example, facial recognition technologies have been shown to have higher error rates for individuals with darker skin tones, leading to potential discrimination (Buolamwini & Gebru, 2018). Addressing these biases requires careful attention to data sourcing and algorithm design; a minimal sketch of one such disparity check appears after this list.
- Privacy and Data Protection: AI research often involves large datasets, some of which may contain personal or sensitive information. Ensuring the privacy and confidentiality of individuals represented in these datasets is a significant ethical challenge. Moreover, the potential for re-identification of anonymized data raises concerns about informed consent and data security (Murdoch, 2021).
- Informed Consent in AI Research: Traditional informed consent processes may not be adequate for AI research, especially when data is repurposed from existing sources or when AI systems evolve. Participants may not fully understand how their data will be used or the implications of AI-driven analyses. This necessitates re-evaluating consent procedures to ensure they remain meaningful and comprehensive in the context of AI (Wadden, 2023).
- Accountability and Responsibility: Determining who is accountable for decisions made by AI systems is complex. If an AI system causes harm or makes a faulty decision, it is challenging to assign responsibility, whether to the developers, users, or the institution deploying the AI. This ambiguity complicates ethical oversight and legal liability (Resnik, 2024).
- Dual-Use Concerns: AI technologies developed for beneficial purposes can be repurposed for harmful applications. For example, AI tools designed for medical diagnostics could be adapted for surveillance or military use. Researchers and review boards must consider the potential for such dual-use scenarios and implement safeguards to prevent misuse (Resnik, 2024).
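To make the bias concern above concrete, the following sketch shows one simple way a review team might surface group-level error disparities before approving a protocol. It is a minimal illustration in Python; the data, group labels, and the 0.1 escalation threshold are hypothetical assumptions, not a validated fairness audit.

```python
# Minimal sketch: surface per-group misclassification rates and flag a
# protocol when the gap exceeds a pre-registered threshold. The data,
# group labels, and threshold below are illustrative assumptions.
from collections import defaultdict

def error_rates_by_group(y_true, y_pred, groups):
    """Return the misclassification rate for each demographic group."""
    errors = defaultdict(int)
    counts = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        counts[group] += 1
        if truth != pred:
            errors[group] += 1
    return {g: errors[g] / counts[g] for g in counts}

# Hypothetical predictions from a classification model.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "B", "B", "B", "B", "A", "A"]

rates = error_rates_by_group(y_true, y_pred, groups)
disparity = max(rates.values()) - min(rates.values())
print(rates, disparity)
if disparity > 0.1:  # escalation threshold chosen for illustration
    print("Error-rate disparity exceeds threshold; escalate for review.")
```

In practice, a board would pre-register which metrics and thresholds trigger escalation and apply them to a held-out evaluation set rather than ad-hoc samples.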
Why Traditional IRB Models Are Struggling to Keep Up

Both institutional and independent IRBs have long been the cornerstone of ethical oversight in research involving human subjects. Their frameworks are well-suited for studies with direct human interaction, such as clinical trials or behavioral research. However, the advent of AI in research introduces complexities that challenge the traditional IRB model.
Ambiguity in Defining Human Subjects
Traditional IRB protocols are designed to protect identifiable human subjects. However, AI research often utilizes large datasets, sometimes anonymized, making it difficult to determine when IRB oversight is necessary. The U.S. Department of Health and Human Services’ Office for Human Research Protections (OHRP) notes that AI research using secondary data may fall outside the purview of current regulations, potentially leaving participants unprotected from unforeseen harm (OHRP, 2022).
This ambiguity is further complicated by the evolving nature of data identifiability. With advancements in data analytics, information once considered non-identifiable can now be re-identified, raising concerns about privacy and consent. Current regulatory frameworks may not adequately address these dynamics, necessitating a re-evaluation of what constitutes a human subject in the context of AI research (OHRP, 2022).
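To see why “anonymized” can be a fragile label, consider a quick k-anonymity check: a record is effectively re-identifiable when its combination of quasi-identifiers is shared by very few individuals. The Python sketch below is illustrative only; the column names and the k = 5 threshold are assumptions, not regulatory requirements.

```python
# Minimal k-anonymity check: a record is re-identifiable when its combination
# of quasi-identifiers (e.g., ZIP code, birth year, sex) is shared by fewer
# than k individuals. Column names and k are assumptions for illustration.
from collections import Counter

def min_group_size(records, quasi_identifiers):
    """Smallest number of records sharing any one quasi-identifier combination."""
    combos = Counter(
        tuple(record[col] for col in quasi_identifiers) for record in records
    )
    return min(combos.values())

records = [
    {"zip": "10001", "birth_year": 1980, "sex": "F", "diagnosis": "..."},
    {"zip": "10001", "birth_year": 1980, "sex": "F", "diagnosis": "..."},
    {"zip": "10002", "birth_year": 1975, "sex": "M", "diagnosis": "..."},
]

k = min_group_size(records, ["zip", "birth_year", "sex"])
if k < 5:  # k-anonymity threshold chosen for illustration
    print(f"Dataset is only {k}-anonymous; treat as potentially identifiable.")
```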
Lack of Technical Expertise
AI research involves complex algorithms and data processing techniques that may be beyond the expertise of traditional IRB members. This knowledge gap can hinder the effective evaluation of AI research protocols. As highlighted by Jackson (2022), many IRBs lack members with sufficient understanding of AI technologies, leading to challenges in assessing the ethical implications of such research. To bridge this gap, some institutions are considering the inclusion of AI experts in IRB panels or establishing specialized committees to oversee AI research. Such measures aim to ensure that ethical reviews are informed by technical insights, thereby enhancing the protection of research participants and the integrity of the research process (Jackson, 2022).
Rapid Technological Advancements
The pace of AI development often outstrips the ability of IRBs to adapt their review processes. Traditional IRB procedures may not be agile enough to keep up with the iterative nature of AI research, where models are continuously updated and refined. This lag can result in outdated ethical assessments that fail to address current risks (Silverman, 2024). Moreover, the dynamic nature of AI systems means that their behavior can change over time, potentially introducing new ethical concerns post-approval. This necessitates ongoing oversight and flexible review mechanisms that can accommodate the evolving risks associated with AI research (Silverman, 2024).
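One concrete form such ongoing oversight could take is statistical monitoring for data drift: comparing the data a deployed system now sees against the data described in the approved protocol. The sketch below uses SciPy’s two-sample Kolmogorov-Smirnov test on a single numeric feature; the synthetic data, the 0.01 significance level, and the re-review trigger are assumptions for illustration.

```python
# Sketch: flag a reviewed AI system for re-review when live input data drifts
# from the data described in the approved protocol. Uses SciPy's two-sample
# Kolmogorov-Smirnov test on one numeric feature; thresholds are illustrative.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
approved_sample = rng.normal(loc=0.0, scale=1.0, size=1000)  # data at review time
live_sample = rng.normal(loc=0.4, scale=1.0, size=1000)      # data in deployment

statistic, p_value = ks_2samp(approved_sample, live_sample)
if p_value < 0.01:  # significance level chosen for illustration
    print(f"Distribution shift detected (KS={statistic:.3f}); trigger re-review.")
```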
Challenges in Assessing Long-Term Impacts
AI systems can have far-reaching and long-term societal impacts that are difficult to predict at the research stage. Traditional IRBs are typically focused on immediate risks to participants, potentially overlooking broader consequences such as algorithmic bias or misuse of AI applications (OHRP, 2022). For instance, an AI tool developed for benign purposes could be repurposed in ways that cause harm, a phenomenon known as dual use. Traditional review boards may not be equipped to evaluate these potential future applications, highlighting the need for ethical frameworks that consider both the immediate and long-term implications of AI research (OHRP, 2022).
Inadequate Regulatory Frameworks
Current regulations governing human subjects research may not adequately address the unique challenges posed by AI. For instance, the Common Rule’s definitions and exemptions may not align with the realities of AI research, leading to gaps in oversight (OHRP, 2022). This regulatory inadequacy is particularly evident in cases where AI research involves the use of large datasets without direct interaction with individuals. Such studies may be exempt from IRB review, yet still pose significant ethical risks, underscoring the need for updated regulations that reflect the complexities of AI research (OHRP, 2022).
Emergence of Alternative Oversight Mechanisms
In response to these challenges, some organizations are establishing specialized committees, such as Algorithm Review Boards (ARBs), to oversee AI research. These boards aim to complement traditional IRBs by providing technical expertise and focusing on the ethical implications specific to AI technologies (Hadley et al., 2024). ARBs can offer a more nuanced understanding of AI systems, enabling more thorough ethical evaluations. By working alongside IRBs, these specialized committees can help ensure that AI research is conducted responsibly, with appropriate safeguards in place to protect participants and society at large (Hadley et al., 2024).
Creating Agile Ethical Review Protocols

In the fast-paced world of AI research, traditional IRB processes often struggle to keep up. The static nature of conventional ethical reviews doesn’t align well with the dynamic and iterative development cycles of AI technologies. To address this mismatch, there’s a growing need to develop agile ethical review protocols that can adapt to the unique challenges posed by AI research.
Embracing Iterative Review Processes
AI systems are frequently updated and refined, leading to changes that can impact ethical considerations. Traditional IRB reviews, typically conducted at a single point in time, may not adequately capture these evolving risks. Implementing iterative review processes allows for continuous oversight, ensuring that ethical evaluations remain relevant throughout the research lifecycle. This approach aligns with recommendations from the Office for Human Research Protections, which emphasizes the importance of ongoing ethical assessment in AI research (OHRP, 2022). By adopting iterative reviews, IRBs can monitor developments such as changes in data sources, algorithmic modifications, or shifts in research objectives. This proactive stance enables the timely identification and mitigation of emerging ethical issues, fostering responsible AI research practices.
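As a hedged illustration of what an iterative-review trigger might look like operationally, the sketch below records the kinds of post-approval changes mentioned above and routes material ones back to the board. The field names and decision rule are hypothetical, not an established IRB standard.

```python
# Hypothetical sketch of an iterative-review trigger: researchers log changes
# to an approved AI protocol, and material changes route back to the IRB.
# Field names and the decision rule are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ProtocolChange:
    description: str
    new_data_source: bool = False     # e.g., a dataset added after approval
    algorithm_modified: bool = False  # e.g., retraining or architecture change
    objective_shifted: bool = False   # e.g., a new research question

    def requires_re_review(self) -> bool:
        """Any material change sends the protocol back for ethical review."""
        return self.new_data_source or self.algorithm_modified or self.objective_shifted

change = ProtocolChange(
    description="Retrained model on newly licensed clinical notes",
    new_data_source=True,
    algorithm_modified=True,
)
if change.requires_re_review():
    print("Submit amendment for continuing IRB review.")
```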
Integrating AI-Specific Ethical Guidelines
The unique aspects of AI research necessitate the development of tailored ethical guidelines. Traditional frameworks may not sufficiently address concerns like algorithmic bias, data privacy, and the interpretability of AI models. Creating AI-specific ethical guidelines provides IRBs with the tools to evaluate these complex issues effectively. The MRCT Center’s initiative to develop resources for assessing AI research protocols exemplifies efforts to equip IRBs with specialized guidance (MRCT Center, 2024). These guidelines can serve as a foundation for IRBs to assess the ethical implications of AI research comprehensively. By incorporating considerations unique to AI, such as the potential for unintended consequences or the challenges of ensuring informed consent, IRBs can enhance their oversight capabilities and promote ethical integrity in AI studies.
Fostering Interdisciplinary Collaboration
The multifaceted nature of AI research calls for interdisciplinary collaboration in ethical reviews. Involving experts from fields such as data science, ethics, law, and social sciences can provide diverse perspectives, enriching the evaluation process. This collaborative approach ensures that ethical considerations encompass technical, legal, and societal dimensions, leading to more comprehensive assessments. Institutions like the MRCT Center have recognized the value of such collaboration, convening task forces that include representatives from academia, industry, and regulatory bodies to address ethical challenges in AI research (MRCT Center, 2024). By fostering interdisciplinary engagement, IRBs can better navigate the complexities of AI studies and promote responsible research practices.
Enhancing Transparency and Accountability
Transparency and accountability are fundamental to ethical research. In the context of AI, this involves clear documentation of data sources, algorithmic decision-making processes, and potential risks. Independent IRBs can play a pivotal role in enforcing transparency by requiring detailed disclosures in research proposals and by promoting open communication between researchers and participants. Moreover, establishing mechanisms for accountability, such as regular audits and post-approval monitoring, can help ensure that researchers adhere to ethical standards throughout the study. These practices not only protect research participants but also build public trust in AI research endeavors.
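One lightweight way to standardize such disclosures is a model-card-style record filed with the protocol. The sketch below shows a hypothetical structure, loosely modeled on published model-card practice; every field name and value is an illustrative assumption rather than a mandated format.

```python
# Hypothetical model-card-style disclosure a board could require with an AI
# protocol. Fields are illustrative, loosely modeled on published model cards.
import json

disclosure = {
    "model_name": "triage-risk-classifier",  # hypothetical study artifact
    "data_sources": ["EHR extract 2019-2023 (consented secondary use)"],
    "intended_use": "Research-only prioritization of chart review",
    "known_limitations": ["Not validated on pediatric records"],
    "bias_evaluation": "Per-group error rates reported in Appendix B",
    "human_oversight": "Clinician reviews every model-flagged case",
}

print(json.dumps(disclosure, indent=2))  # attach to the submission record
```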
Policy Alignment and Regulatory Synergy

As AI continues to permeate various facets of research, ensuring that IRBs remain aligned with evolving policies and regulations becomes paramount. The rapid advancement of AI technologies presents unique challenges that traditional regulatory frameworks may not adequately address, necessitating a concerted effort to harmonize ethical oversight with contemporary technological developments.
Regulatory Misalignment
The existing regulatory landscape often struggles to keep pace with the swift evolution of AI technologies. This misalignment can lead to gaps in oversight, where AI-driven research operates in areas not clearly defined by current regulations. For instance, the use of de-identified data in AI research may bypass traditional IRB scrutiny, yet still pose significant ethical concerns regarding privacy and consent (OHRP, 2022). To address these challenges, there is a pressing need for regulators and review boards to revisit and revise existing policies, ensuring they encompass the dynamics of AI research. This includes clarifying definitions of human subjects in the context of AI, establishing guidelines for data use, and delineating the responsibilities of researchers and institutions in safeguarding participant rights.
Fostering Interagency Collaboration
Achieving regulatory synergy requires robust collaboration among various stakeholders, including federal agencies, academic institutions, and industry partners. Such collaboration can facilitate the development of comprehensive guidelines that reflect the multifaceted nature of AI research. The MRCT Center’s initiative to convene a task force comprising representatives from academia, industry, and regulatory bodies exemplifies efforts to create cohesive frameworks for ethical AI research (MRCT Center, 2024). By fostering open communication and shared understanding among stakeholders, these collaborative efforts can lead to the establishment of standardized practices that ensure ethical integrity across diverse research settings. This collective approach is essential in navigating the complex ethical terrain introduced by AI technologies.
Implementing Adaptive Regulatory Mechanisms
Given the dynamic nature of AI, static regulatory frameworks may prove insufficient. There is a growing recognition of the need for adaptive regulatory mechanisms that can respond to the evolving landscape of AI research. This includes the adoption of regulatory sandboxes, controlled environments where new technologies can be tested under regulatory supervision, to assess the implications of AI applications before widespread deployment (Díaz-Rodríguez et al., 2023). Such adaptive approaches enable regulators to identify potential risks and ethical concerns in real time, allowing for timely interventions and policy adjustments. This proactive stance is crucial in maintaining ethical standards amidst the rapid progression of AI technologies.
Enhancing Transparency and Accountability
Transparency and accountability are foundational to ethical research practices. In the context of AI, this entails clear documentation of data sources, algorithmic decision-making processes, and potential biases. Regulatory frameworks must mandate comprehensive reporting and auditing mechanisms to ensure that AI research adheres to established ethical standards (Mokander & Floridi, 2021). By enforcing stringent transparency requirements, regulators can hold researchers and institutions accountable for the ethical implications of their AI applications. This not only safeguards participant rights but also fosters public trust in AI-driven research endeavors.
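As a sketch of what a technically enforceable audit mechanism could look like, each log entry below commits to its predecessor with a cryptographic hash, so any after-the-fact edit breaks the chain and is detectable. This is a generic hash-chaining pattern, not a format prescribed by any regulator.

```python
# Sketch of a tamper-evident audit trail: each entry hashes the previous one,
# so silently rewriting history breaks the chain. A generic pattern, not a
# regulator-prescribed format.
import hashlib
import json

def append_entry(log, event: str) -> None:
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {"event": event, "prev_hash": prev_hash}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    log.append(body)

def verify(log) -> bool:
    """Recompute every hash; any edited entry invalidates the chain."""
    prev_hash = "0" * 64
    for entry in log:
        body = {"event": entry["event"], "prev_hash": prev_hash}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["hash"] != expected or entry["prev_hash"] != prev_hash:
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, "Model v2 deployed to study cohort")
append_entry(log, "Training data refreshed from source A")
print(verify(log))  # True; flips to False if any past entry is altered
```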
Promoting International Regulatory Harmonization
AI research often transcends national boundaries, necessitating international cooperation in establishing ethical standards. Disparities in regulatory approaches across countries can lead to inconsistencies in ethical oversight, potentially compromising participant protections. Efforts to harmonize regulations, such as aligning with the European Union’s Ethics Guidelines for Trustworthy AI, are instrumental in creating a cohesive global framework for ethical AI research (Stix, 2021). International regulatory harmonization ensures that ethical standards are uniformly applied, regardless of geographic location, thereby upholding the rights and welfare of research participants worldwide. Such alignment is vital in fostering responsible and ethical AI research on a global scale.
Transparency and Public Trust in AI Research Ethics

In AI research, transparency isn’t just a buzzword; it’s a foundational pillar that underpins public trust. As AI systems become increasingly integrated into various aspects of society, from healthcare to criminal justice, the ethical considerations surrounding their development and deployment have come under intense scrutiny. Ensuring that AI research is conducted transparently is crucial for maintaining public confidence and ensuring that these technologies serve the public good.
Transparency in AI research involves clear documentation of methodologies, data sources, and decision-making processes. This openness allows for independent verification and fosters a culture of accountability. For instance, the European Centre for Algorithmic Transparency (ECAT) was established to provide scientific and technical expertise to support the enforcement of the Digital Services Act, emphasizing the importance of transparency in algorithmic systems (European Centre for Algorithmic Transparency, 2024). Such initiatives highlight the growing recognition of transparency as a means to bolster public trust.
However, achieving transparency is not without challenges. AI systems, particularly those utilizing complex machine learning algorithms, often operate as “black boxes,” making it difficult to understand how specific decisions are made. This opacity can lead to skepticism and resistance from the public. A study published in AI and Ethics discusses the global landscape of transparency standards and emphasizes the need for adaptable, sector-specific regulatory structures to keep pace with AI’s rapid technological advancement (Lund et al., 2025).
Moreover, transparency must be balanced with other ethical considerations, such as privacy and security. Disclosing too much information about AI systems can inadvertently expose sensitive data or proprietary algorithms. Therefore, establishing clear guidelines on what information should be made public is essential. The IEEE P7001 standard, for example, provides a framework for transparency in autonomous systems, aiming to strike a balance between openness and confidentiality.
Public engagement is another critical component of building trust in AI research. Involving diverse stakeholders, including ethicists, sociologists, and representatives from affected communities, can provide valuable perspectives and help identify potential ethical pitfalls early in the research process. The Swiss Digital Initiative’s Digital Trust Label is an example of efforts to certify the trustworthiness of digital services based on criteria such as security, data protection, and fair user interaction (Swiss Digital Initiative, 2024).
Educational initiatives also play a vital role in fostering public trust. By demystifying AI technologies and explaining their capabilities and limitations, researchers can alleviate fears and misconceptions. Transparent communication ensures that the public is informed about how AI systems function and the measures in place to safeguard ethical standards.
Conclusion

As artificial intelligence (AI) reshapes the research landscape, the ethical oversight that once worked for traditional studies is no longer enough. Institutional Review Boards (IRBs) are now faced with reviewing increasingly complex AI-related research that raises questions about fairness, privacy, and accountability. The speed and scale of AI development demand a more responsive, informed approach. That’s where we come in.
At BeyondBound IRB, we understand what’s at stake. We offer a comprehensive, bespoke approach to IRB review that’s built to meet the demands of today’s most innovative researchers. Whether your study involves AI or another emerging technology, our team brings expertise and care to every step and every detail of the process, with you in mind. Our goal is simple: no roadblocks, just support. We focus on building a clear path to fast, confident approval by eliminating unnecessary hurdles and empowering researchers with the clarity they need.
We don’t just review protocols; we work alongside you. Through direct engagement, we provide comprehensive support that helps you move forward without getting stuck in bureaucratic loops. Our process is designed to be efficient and stress-free, with transparent, customized pricing tailored to your study’s specific needs. We also tap into our affiliated network of experts when needed, so your research benefits from broader perspectives without slowing down.
If you’re new to IRB review or working with AI-related protocols for the first time, our IRB Heart program is built for you. We offer focused, practical IRB training that’s designed to foster collaboration and build confidence in your submissions. Whether you’re a student, PI, or research coordinator, IRB Heart provides the tools and knowledge to navigate ethical reviews with ease.
Ready to move forward without compromise? Partner with BeyondBound IRB and IRB Heart. Your research deserves nothing less.
References
Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of Machine Learning Research, 81, 1–15.
Díaz-Rodríguez, N., Del Ser, J., Coeckelbergh, M., López de Prado, M., Herrera-Viedma, E., & Herrera, F. (2023). Connecting the dots in trustworthy artificial intelligence: From AI principles, ethics, and key requirements to responsible AI systems and regulation. arXiv preprint arXiv:2305.02231.
European Centre for Algorithmic Transparency. (2024). European Centre for Algorithmic Transparency. European Commission, Joint Research Centre.
Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., … & Vayena, E. (2018). AI4People – An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689–707. https://doi.org/10.1007/s11023-018-9482-5
Hadley, E., Blatecky, A., & Comfort, M. (2024). Investigating algorithm review boards for organizational responsible artificial intelligence governance. AI and Ethics, 5, 2485–2495. https://doi.org/10.1007/s43681-024-00574-8
Jackson, B. R. (2022). IRBs are reviewing artificial intelligence research, outside expertise needed. Relias Media. https://www.reliasmedia.com/articles/148976-irbs-are-reviewing-artificial-intelligence-research-outside-expertise-needed
Lund, B., Orhan, Z., Mannuru, N. R., Bevara, R. V. K., Porter, B., & Vinaih, M. K. (2025). Standards, frameworks, and legislation for artificial intelligence (AI) transparency. AI and Ethics. https://doi.org/10.1007/s43681-025-00661-4
Mann, S. P., Seah, J. J., Latham, S. R., Savulescu, J., Aboy, M., & Earp, B. D. (2025). Development of application-specific large language models to facilitate research ethics review. arXiv preprint arXiv:2501.10741.
Mokander, J., & Floridi, L. (2021). Ethics-based auditing to develop trustworthy AI. arXiv preprint arXiv:2105.00002.
MRCT Center. (2024). Artificial intelligence (AI) and ethical research. https://mrctcenter.org/project/ethics-ai/
Murdoch, B. (2021). Privacy and artificial intelligence: Challenges for protecting health information in a new era. BMC Medical Ethics, 22(1), 122. https://doi.org/10.1186/s12910-021-00687-3
Office for Human Research Protections. (2022). IRB considerations on the use of artificial intelligence in human subjects research. U.S. Department of Health and Human Services. https://www.hhs.gov/ohrp/sachrp-committee/recommendations/irb-considerations-use-artificial-intelligence-human-subjects-research/index.html
Resnik, D. B. (2024). The ethics of using artificial intelligence in scientific research: New challenges and recommendations. AI and Ethics, 4(1), 1–9. https://doi.org/10.1007/s43681-024-00493-8
Silverman, B. C. (2024). IRB review of research involving AI. National Institutes of Health. https://irbo.nih.gov/confluence/download/attachments/45646144/Silverman-AI_Talk_for_NIH_OHSRP_4.4.2024_508C.pdf
Stix, C. (2021). Actionable Principles for Artificial Intelligence Policy: Three Pathways. arXiv preprint arXiv:2102.12406.
Swiss Digital Initiative. (2024). Digital Trust Label.
Wadden, J. (2023). Understanding artificial intelligence with the IRB: Ethical considerations and advice for responsible research in the AI era. Teachers College, Columbia University. https://www.tc.columbia.edu/institutional-review-board/irb-blog/2024/understanding-artificial-intelligence-with-the-irb-ethics-and-advice/