IRB Oversight in the Age of Rapid Technological Advancements in Research

The role of Institutional Review Boards (IRBs) has never been more critical than in today’s fast-evolving research landscape. Initially established to protect human subjects from unethical research practices, IRBs have long been the gatekeepers of ethical oversight in clinical trials, social science studies, and biomedical research. However, with the rapid advancement of technology, research methodologies have drastically shifted, bringing both new opportunities and unprecedented ethical challenges (Resnik, 2018).

From artificial intelligence-driven data analysis to wearable health monitoring devices, researchers today have access to tools that were unimaginable just a few decades ago. These technologies promise breakthroughs in medical treatment, behavioral research, and personalized healthcare. However, they also raise pressing ethical questions that IRBs must now grapple with: questions about data privacy, informed consent in digital environments, and the potential for algorithmic biases in research findings (Fiske et al., 2019). Traditional ethical guidelines, while still relevant, often struggle to keep pace with innovations that blur the lines between research, surveillance, and commercial technology.

For instance, the collection and use of big data in research can be highly beneficial, allowing for large-scale analyses that improve health outcomes. However, many of these datasets are compiled from sources that were never intended for research, such as social media activity, fitness trackers, and online search behavior. This creates a gray area regarding informed consent and data ownership (Metcalf & Crawford, 2016). IRBs must now determine how to apply ethical principles such as autonomy and beneficence to research that relies on passive data collection or AI-driven analytics.

Additionally, the rise of remote and digital research methods, such as telemedicine studies, mobile health interventions, and online behavioral experiments, has challenged traditional notions of participant engagement. Unlike in-person studies, where researchers can directly interact with participants to explain risks and ensure comprehension, digital research often relies on lengthy consent forms or automated agreements that may not adequately communicate risks (Nebeker et al., 2017). This shift demands a more dynamic and responsive approach to oversight, one that maintains ethical integrity while accommodating technological progress.

The fundamental mission of IRBs to protect research participants from harm remains unchanged. However, the strategies they use to fulfill this mission must evolve. The ethical considerations of the past, which largely centered on preventing physical harm and coercion, must now expand to address concerns like digital security breaches, algorithmic discrimination, and the unintended consequences of predictive analytics (Mittelstadt et al., 2016).

In this blog, let’s explore how IRB oversight is adapting (or struggling to adapt) to the challenges presented by rapid technological advancement. We will examine emerging ethical dilemmas, regulatory gaps, and best practices for ensuring that research remains both innovative and ethically sound. In doing so, we aim to shed light on the delicate balance IRBs must strike between fostering scientific progress and safeguarding the rights and well-being of research participants.

I. The Changing Research Framework

The way research is conducted today looks vastly different from just a decade ago. Emerging technologies have reshaped methodologies, introduced new data collection tools, and expanded the scale and scope of research in ways that were once unimaginable. While these advancements have opened doors to important findings, they have also presented ethical and regulatory challenges that IRBs must now navigate. From AI-driven analytics to decentralized clinical trials, researchers and oversight boards alike are grappling with how to balance innovation with the fundamental responsibility of protecting human participants.

Growth of Digital Tools in Research (AI, Big Data, Wearable Tech)

One of the most significant changes in research today is the rise of digital tools that enable large-scale data collection and analysis. Artificial intelligence (AI) and big data analytics allow researchers to process massive datasets, uncover patterns, and make predictions that were previously impossible with traditional statistical methods (Floridi & Taddeo, 2016). These tools are particularly transformative in fields such as healthcare, social sciences, and behavioral research, where they are used to detect disease patterns, predict mental health trends, and even assess human decision-making.

Wearable technology such as smartwatches, fitness trackers, and biosensors has also become a popular tool for researchers looking to gather continuous, real-time data on individuals’ physical activity, heart rate, sleep patterns, and more. These devices have immense potential for advancing medical research, particularly in chronic disease management and preventive health (Piwek et al., 2016). However, they also raise ethical concerns about data privacy, ownership, and consent, especially when users may not fully understand how their information is being used.

Independent IRBs now face the challenge of assessing research proposals that use these technologies, ensuring that participants’ data remains protected and that consent processes are transparent and meaningful. Unlike traditional studies, where participants actively agree to participate in controlled experiments, digital research often involves passive data collection, where individuals may not even be aware that their information is being analyzed. This shift demands a reevaluation of how IRBs define informed consent and participant autonomy in the digital age.

Increased Reliance on Remote and Virtual Research Methods

Another major shift in the research framework is the growing reliance on remote and virtual research methods. The COVID-19 pandemic accelerated the adoption of online surveys, video-based interviews, and remote clinical trials, proving that high-quality research could be conducted outside of traditional lab or hospital settings (O’Connor et al., 2021). This transition has made research more accessible to diverse populations, including individuals in rural areas or those with mobility limitations who may have previously been excluded from in-person studies.

However, conducting research remotely presents a new set of ethical and practical challenges. For example, how can you ensure that participants fully understand the risks and benefits of a study when consent is obtained online rather than face-to-face? How can you verify that participants are providing truthful responses when they are not being observed in a controlled environment? And in the case of telehealth studies, how do researchers handle emergency situations when a participant experiences distress or adverse effects miles away from the research team?

Human subject review boards must now consider these factors when reviewing studies that utilize remote methodologies. They must assess whether researchers have established adequate measures to protect participants, such as secure data transmission, appropriate follow-up protocols, and contingency plans for handling participant distress. As virtual research becomes more commonplace, IRBs will need to develop guidelines that address these emerging risks while still allowing for the benefits of increased accessibility and inclusivity in research.

Expansion of Interdisciplinary Studies Involving Tech and Human Subjects

Research today is no longer confined to traditional disciplinary boundaries. Studies are increasingly incorporating elements of technology, ethics, and human behavior in ways that challenge conventional IRB frameworks. Fields such as neuroscience, artificial intelligence, and bioengineering are converging, leading to studies that explore human-technology interactions, brain-computer interfaces, and even genetic modifications (Yuste et al., 2017). These interdisciplinary projects push the limits of what IRBs are accustomed to reviewing, as they often involve ethical considerations that do not fit neatly into existing regulatory categories.

For example, neurotechnology research, which involves devices that can read or alter brain activity, raises profound questions about autonomy and consent. If a brain-computer interface is used to help a paralyzed individual regain movement, who owns the data generated by that device? Can that data be used for further research without the participant’s explicit consent? And what are the potential risks of unintended cognitive or psychological side effects? These are the kinds of questions that IRBs must now tackle as technology continues to advance rapidly.

Similarly, AI-driven psychological research presents unique challenges. Machine learning models are being used to analyze human emotions, predict mental health outcomes, and even generate personalized behavioral interventions. However, these algorithms are not always transparent, and they can reflect biases that may lead to harmful or misleading conclusions (Mehrabi et al., 2021). IRBs must therefore consider how to assess algorithmic transparency and fairness when reviewing studies that rely on AI to analyze human behavior.

The growing overlap between technology and human research underscores the need for IRBs to take a more interdisciplinary approach. Traditional biomedical and social science ethics may no longer be sufficient to address the challenges of these emerging fields. Moving forward, IRBs may need to collaborate more closely with ethicists, technologists, and data privacy experts to ensure that research remains both innovative and ethically responsible.

II. Ethical Challenges Posed by New Technologies

The rapid advancement of technology has fundamentally reshaped how research involving human subjects is conducted. While these innovations offer unprecedented opportunities to improve health outcomes, social understanding, and human capabilities, they also introduce ethical challenges that IRBs must now confront. Many of these challenges stem from the increased collection and analysis of personal data, the evolving nature of informed consent, and the unintended biases embedded in algorithmic decision-making. As researchers integrate artificial intelligence, big data analytics, and biotechnology into their studies, ensuring ethical oversight is more difficult and more necessary than ever.

Privacy and Data Security: Handling Vast Amounts of Sensitive Data

One of the most pressing ethical concerns in tech-driven research is the handling of sensitive personal data. With the rise of wearable health monitors, smartphone-based behavioral studies, and AI-powered medical diagnostics, researchers now have access to immense datasets that contain highly personal information, often without participants realizing the full extent of their data’s use (Nissenbaum, 2019).

For example, studies using fitness trackers or smartphone apps often collect GPS locations, sleep patterns, and heart rate data, which can reveal more about a participant than they initially intended to share. Additionally, large-scale genomic studies store DNA data that could, if misused or breached, be linked back to individuals or even their family members. These ethical dilemmas challenge traditional IRB frameworks, which were developed in an era when research data was primarily collected through direct interactions rather than through passive, continuous digital tracking.

Furthermore, researchers must ensure compliance with evolving data protection regulations such as the General Data Protection Regulation (GDPR) in Europe and the Health Insurance Portability and Accountability Act (HIPAA) in the United States. These policies outline strict guidelines for data handling, yet they do not always cover emerging issues, such as AI-driven re-identification of de-identified data. IRBs must now assess whether researchers have the appropriate safeguards in place, including data encryption, secure storage, and clear policies on third-party data sharing (Ohm, 2010).
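
As one concrete example of such a safeguard, the sketch below encrypts a participant record at rest using the Fernet recipe from Python’s cryptography package (symmetric, authenticated encryption). The record fields and key handling are illustrative assumptions only; a real deployment would also involve managed key storage, access controls, and encryption in transit.

```python
# Minimal sketch: encrypting a participant record at rest with Fernet
# (symmetric, authenticated encryption from the "cryptography" package).
# The record fields below are invented for illustration.
import json
from cryptography.fernet import Fernet

# In practice the key would live in a managed secrets store, never in code.
key = Fernet.generate_key()
cipher = Fernet(key)

participant_record = {"participant_id": "P-0042", "resting_hr": 61, "sleep_hours": 7.5}

# Serialize and encrypt before writing to disk or sending to a server.
token = cipher.encrypt(json.dumps(participant_record).encode("utf-8"))

# Only holders of the key can recover (and verify the integrity of) the data.
restored = json.loads(cipher.decrypt(token).decode("utf-8"))
assert restored == participant_record
```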

Informed Consent in Digital Studies

Informed consent is a cornerstone of ethical research, ensuring that participants fully understand the risks and benefits of their involvement. However, in the digital age, obtaining truly informed consent has become more complicated. Many research studies now rely on digital platforms for participant recruitment and data collection, often using lengthy, jargon-filled consent forms that few participants thoroughly read (Berg et al., 2021).

This issue is especially pronounced in AI-driven research, where the complexity of machine learning algorithms makes it difficult for both researchers and participants to predict how data will be used in the future. Consider, for example, a study that uses an AI model to analyze social media posts for mental health predictions. Even if participants consent to the study, they may not fully grasp how their data will be analyzed, stored, or repurposed for future research.

To address these challenges, some researchers are experimenting with interactive consent models, such as video-based explanations or chatbot-assisted consent processes that allow participants to ask questions in real time (Nebeker et al., 2019). IRBs must now evaluate whether these alternative consent approaches are sufficient to ensure that participants genuinely understand what they are agreeing to, especially in studies involving emerging technologies, where risks are not always immediately clear.

Algorithmic Bias and Fairness: The Ethical Implications of AI-Driven Research

AI and machine learning are becoming integral tools in research, from predicting disease outbreaks to identifying patterns in human behavior. However, these technologies are not free from bias. If an AI model is trained on data that lacks diversity or reflects societal inequalities, it can perpetuate those biases in its conclusions (Mehrabi et al., 2021).

For example, medical AI systems that are trained primarily on data from white male patients may be less accurate when diagnosing conditions in women or people of color. Similarly, predictive policing algorithms have been shown to disproportionately flag individuals from minority communities, leading to ethical concerns about fairness and discrimination (Obermeyer et al., 2019). These biases are not always intentional, but they can have real-world consequences, particularly when research findings are used to inform public policy or medical treatment decisions.
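
One concrete safeguard reviewers can ask for is a subgroup error audit: before a model’s findings are acted on, compare its miss rate across demographic groups. Below is a minimal sketch of that calculation in Python; the groups, labels, and predictions are invented solely to show the arithmetic, not drawn from any real study.

```python
# Minimal sketch of a subgroup error audit: compare the false-negative rates
# of a hypothetical diagnostic model across demographic groups.
# All records below are invented for illustration.
from collections import defaultdict

records = [
    # (group, true_label, model_prediction); 1 means the condition is present
    ("group_a", 1, 1), ("group_a", 1, 1), ("group_a", 1, 0), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 1, 0), ("group_b", 1, 1), ("group_b", 0, 0),
]

positives = defaultdict(int)  # cases where the condition is truly present
misses = defaultdict(int)     # of those, cases the model failed to flag

for group, truth, pred in records:
    if truth == 1:
        positives[group] += 1
        if pred == 0:
            misses[group] += 1

for group, total in positives.items():
    print(f"{group}: false-negative rate = {misses[group] / total:.2f}")
# Here group_a misses 1 of 3 cases (0.33) while group_b misses 2 of 3 (0.67).
# A gap like this is exactly what reviewers would expect researchers to
# detect, explain, and mitigate before the model informs care decisions.
```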

Both institutional and independent IRBs now face the challenge of assessing whether AI-driven studies account for these biases. This includes scrutinizing how datasets are collected, whether diverse populations are represented, and whether researchers have built mechanisms to detect and mitigate bias in their algorithms. In addition to protecting individual participants, IRBs must also consider the broader societal implications of biased research findings—especially as AI continues to influence critical areas like healthcare, criminal justice, and hiring practices.

Human Augmentation and Biotech: Navigating Ethical Concerns in Neurotechnology and Genetic Studies

Beyond AI and big data, advancements in biotechnology and neurotechnology present entirely new ethical dilemmas for IRBs. Brain-computer interfaces (BCIs), for instance, allow individuals to control devices using their neural activity, offering significant potential for people with disabilities. However, they also raise questions about cognitive privacy, identity, and autonomy (Yuste et al., 2017). If a research study involves implanting a device that alters brain function, what happens if the participant later wants it removed? And who owns the data generated by a participant’s brain signals—the individual, the researcher, or the company producing the device?

Similarly, genetic research is progressing at an extraordinary pace, with CRISPR and other gene-editing technologies enabling modifications that could one day prevent hereditary diseases. While these breakthroughs have the potential to revolutionize medicine, they also raise ethical concerns about unintended genetic consequences, consent for future generations, and the possibility of genetic discrimination (Doudna & Sternberg, 2017).

Human subject review boards are now being asked to review studies that push the boundaries of what is ethically permissible. Unlike traditional biomedical research, where risks are often physical and immediate, neurotechnology and genetic studies introduce long-term, possibly irreversible consequences. This means IRBs must consider not just the impact on current participants, but also the ethical ramifications for future generations and broader society.

III. The Role of IRBs in Adapting to Technological Changes

IRBs have always been tasked with protecting the rights and welfare of research participants, ensuring that studies adhere to ethical principles such as respect for persons, beneficence, and justice (Belmont Report, 1979). However, as research methodologies evolve in the face of rapid technological advancements, IRBs must also evolve. Emerging digital tools, AI-driven research, and new biomedical innovations introduce challenges that traditional ethical frameworks were not designed to address. To remain effective, IRBs must adopt new strategies, expand their expertise, and refine their oversight processes to balance scientific progress with participant protection.

Updating Ethical Guidelines to Address New Research Methodologies

Many of the ethical guidelines that IRBs rely on today were established in an era when research was conducted primarily in controlled environments such as laboratories and clinics. However, modern research is increasingly digital, decentralized, and data-driven, requiring IRBs to rethink how ethical principles apply to new methodologies.

  • Reassessing Risk in Digital Research: Traditionally, IRBs have categorized research risks as physical, psychological, or social. However, digital research introduces informational risks, such as data breaches, re-identification of de-identified data, and AI-driven profiling (Ohm, 2010). IRBs must now evaluate how these risks compare to more conventional harms and ensure that researchers implement adequate safeguards.
  • Developing Guidelines for AI and Machine Learning Research: Many IRBs are unfamiliar with the inner workings of machine learning, yet they are being asked to approve studies involving AI-driven decision-making. Ethical concerns such as algorithmic bias, lack of explainability, and automated decision-making require new review criteria (Mittelstadt et al., 2016).
  • Revising Standards for Remote and Digital Consent: With the rise of virtual research, IRBs must develop standards for ensuring that digital consent processes, such as e-signatures, video-assisted consent, or interactive chatbot explanations, are as robust as traditional in-person consent (Nebeker et al., 2019).

Balancing Innovation with Risk Mitigation

A core challenge for IRBs is finding the balance between allowing innovative research to proceed while minimizing potential risks to participants. Overly cautious IRBs can delay or even prevent important scientific advancements, while lax oversight can expose participants to harm. Striking the right balance requires a more nuanced approach to risk assessment, one that accounts for both potential benefits and ethical risks.

  • Implementing Proportional Review Processes: Some IRBs have begun adopting a risk-based review approach. Studies involving minimal risks (such as anonymous online surveys) receive expedited approval, while high-risk studies (such as AI-driven medical diagnostics) undergo more intensive scrutiny (Fiske et al., 2019).
  • Encouraging Ethical-by-Design Research: Instead of acting as regulatory gatekeepers, IRBs should encourage researchers to embed ethical considerations into their study designs from the outset. This includes requiring researchers to conduct fairness assessments for AI models, implement privacy-enhancing technologies, and design transparent consent processes.
  • Facilitating Ethical Innovation: Some IRBs have begun establishing ethics consultation services where researchers can seek early guidance on ethical challenges before formally submitting their proposals (Emanuel et al., 2000). This proactive approach can prevent ethical issues from derailing studies later in the review process.

Enhancing IRB Expertise Through Interdisciplinary Collaboration

One of the biggest obstacles IRBs face in adapting to technological change is a knowledge gap. Many institutional IRB members are trained in traditional bioethics and social science methodologies, but fewer have expertise in emerging fields like AI ethics, cybersecurity, or digital privacy. As research becomes more interdisciplinary, IRBs must expand their expertise to keep pace with new ethical dilemmas.

  • Incorporating Technology and Data Science Experts into IRBs: Some institutions have started including computer scientists, AI ethicists, and data privacy experts on IRBs to help evaluate studies involving advanced technology (Leslie, 2019). Their expertise is crucial in assessing risks such as algorithmic bias, predictive analytics, and cybersecurity vulnerabilities.
  • Developing Specialized Training Programs for IRB Members: IRBs should offer ongoing education on topics like big data ethics, AI transparency, and emerging digital research methods. Training programs, online courses, and workshops can help IRB members stay informed about new ethical challenges.
  • Consulting External Ethics Boards: For particularly complex cases, such as studies involving brain-computer interfaces or genetic editing, IRBs may need to consult specialized ethics committees that focus exclusively on those fields (Yuste et al., 2017).

Using Technology for More Efficient and Effective IRB Processes

Just as research methodologies are evolving, so too should the administrative processes that IRBs rely on. Many IRBs still operate with paper-based review systems and lengthy deliberation periods that slow down research approval. Using technology can make the review process more efficient, transparent, and adaptable.

  • Using AI and Automation for Initial Screening: AI-powered tools can assist IRBs by automatically flagging high-risk research protocols, identifying missing consent components, and checking for compliance with ethical guidelines (Meyer, 2020). This would allow IRBs to focus their efforts on more challenging ethical deliberations.
  • Implementing Digital Ethics Review Platforms: Some institutions are moving toward online platforms where researchers can submit proposals, track the review process, and receive feedback in real time. This reduces administrative burdens and improves communication between researchers and IRBs.
  • Adopting Blockchain for Secure Data Tracking: Blockchain technology has been proposed as a method for tracking consent agreements, ensuring data integrity, and improving transparency in clinical trials (Kshetri, 2017). Through blockchains, IRBs could better monitor compliance with ethical standards and prevent data manipulation.
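
To make the tamper-evidence idea behind that last proposal concrete, here is a toy hash chain in Python: each consent event is hashed together with the hash of the previous entry, so any retroactive edit invalidates every later entry. This is a deliberately simplified sketch (invented field names, no distributed ledger or consensus), not a production consent system.

```python
# Toy hash chain illustrating tamper-evident consent tracking: each entry's
# hash covers the previous hash, so editing an old record breaks the chain.
import hashlib
import json

def entry_hash(prev_hash: str, event: dict) -> str:
    payload = prev_hash + json.dumps(event, sort_keys=True)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

def append(chain: list, event: dict) -> None:
    prev = chain[-1]["hash"] if chain else "genesis"
    chain.append({"event": event, "hash": entry_hash(prev, event)})

def verify(chain: list) -> bool:
    prev = "genesis"
    for entry in chain:
        if entry["hash"] != entry_hash(prev, entry["event"]):
            return False
        prev = entry["hash"]
    return True

ledger: list = []
append(ledger, {"participant": "P-0042", "action": "consented", "version": "v1"})
append(ledger, {"participant": "P-0042", "action": "withdrew", "version": "v1"})
assert verify(ledger)

# Tampering with an earlier record is immediately detectable:
ledger[0]["event"]["action"] = "declined"
assert not verify(ledger)
```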

By modernizing their own processes, IRBs can become more agile and better equipped to handle the ethical complexities of contemporary research. While these changes require effort and adaptation, they are essential to maintaining the delicate balance between protecting research participants and fostering scientific progress. By embracing a forward-thinking and interdisciplinary approach, IRBs can continue to serve their vital role in ensuring that research remains both innovative and ethically responsible. Let’s now look at regulatory and policy considerations, and how regulatory bodies can adapt policies to address modern research risks while still encouraging scientific progress.

IV. Regulatory and Policy Considerations

As research evolves with rapid technological advancements, the policies and regulations governing human subject protections must evolve, too. Historically, ethical guidelines like the Belmont Report (1979) and regulations such as the Common Rule (45 CFR 46) in the U.S. have provided the foundation for IRB oversight. These frameworks emphasize respect for persons, beneficence, and justice, but they were created in an era when research was conducted through in-person trials, physical data collection, and direct participant interactions. Today, with AI-driven studies, decentralized clinical trials, and big data research, these regulations are being stretched to their limits.

The challenge now is how regulatory bodies, such as the Office for Human Research Protections (OHRP), the Food and Drug Administration (FDA), and international entities like the European Data Protection Board (EDPB), can adapt policies to address modern research risks. This section explores the gaps in current regulations, the need for updated ethical frameworks, and the push for global harmonization in research governance.

Current Limitations of Existing Regulations (e.g., Common Rule, GDPR)

Many of the policies that institutional IRBs rely on today were designed before digital data, AI, and cloud-based research became commonplace. Take, for example, the Common Rule, which governs federally funded research in the U.S. While its 2018 update introduced improvements such as broad consent for the use of biospecimens in secondary research, it still doesn’t adequately address some of the biggest challenges researchers face today.

One major gap involves big data and AI-driven research. The Common Rule assumes that research data can be “de-identified” to protect participants’ privacy. However, modern machine learning algorithms can often re-identify anonymized data by cross-referencing multiple datasets, making privacy protections more fragile than they once were (Ohm, 2010). This means that research studies using AI to analyze large-scale health records, social media interactions, or wearable device data could unknowingly put participants at risk, even if they technically follow current regulations.
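
A simplified linkage attack illustrates the problem. If a “de-identified” dataset shares quasi-identifiers such as ZIP code, birth year, and sex with any public record, a unique match on those fields re-identifies the row, which is the kind of failure Ohm (2010) describes. All rows below are invented for illustration.

```python
# Simplified linkage attack: joining a "de-identified" study dataset with a
# public record on shared quasi-identifiers can single a participant out.
# All data below is invented for illustration.

deidentified_study = [
    {"zip": "02138", "birth_year": 1984, "sex": "F", "diagnosis": "condition_x"},
    {"zip": "02139", "birth_year": 1991, "sex": "M", "diagnosis": "condition_y"},
    {"zip": "02138", "birth_year": 1975, "sex": "M", "diagnosis": "condition_z"},
]

public_record = {"name": "Jane Doe", "zip": "02138", "birth_year": 1984, "sex": "F"}

# Find study rows whose quasi-identifiers match the public record.
matches = [
    row for row in deidentified_study
    if (row["zip"], row["birth_year"], row["sex"])
    == (public_record["zip"], public_record["birth_year"], public_record["sex"])
]

if len(matches) == 1:
    # A unique match re-identifies the "anonymous" row, and its diagnosis.
    print(f"{public_record['name']} likely has {matches[0]['diagnosis']}")
```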

Similarly, the GDPR, Europe’s landmark data privacy law, has strict rules on informed consent and data processing. However, it presents challenges for researchers conducting international studies, as data-sharing restrictions between the U.S. and the EU can slow down research collaborations. While GDPR prioritizes data security, it doesn’t always account for the realities of scientific research, where anonymization isn’t foolproof, and informed consent in AI-driven studies is complex (Floridi, 2018).

The Need for New Ethical Frameworks to Govern Emerging Research Paradigms

As technology transforms how research is conducted, there’s growing recognition that new ethical frameworks are needed, ones that go beyond traditional biomedical research guidelines. Some scholars argue that research ethics must expand beyond just human subject protections to consider broader societal and technological impacts (Metcalf & Crawford, 2016).

For example, AI-driven research raises questions about algorithmic accountability. If a study uses AI to predict mental health disorders or recommend medical treatments, and that algorithm makes flawed predictions that harm participants, who is responsible? Unlike a traditional clinical trial, where a researcher’s intervention is clear, AI models are often black boxes, making it difficult to assign responsibility when things go wrong (Mittelstadt et al., 2016).

Moreover, neurotechnology and genetic research introduce ethical challenges that weren’t on regulators’ radars even a decade ago. If a research participant agrees to have their genome edited in an experimental study, what happens if unexpected side effects emerge years later? Or, in the case of brain-computer interfaces, what protections exist against potential hacking or manipulation of neural data? These are ethical gray areas that existing regulations do not fully address, prompting calls for new, technology-specific research governance models (Yuste et al., 2017).

One promising approach is the Ethical AI Guidelines proposed by the European Commission, which emphasize transparency, fairness, and human oversight in AI research. Similarly, the WHO has begun drafting global recommendations for the ethical use of gene editing, recognizing that the potential for misuse, especially in human enhancement applications, is real. However, these guidelines are still largely voluntary, leaving it up to IRBs to decide how to interpret them in research oversight.

Global Harmonization of Ethical Standards in Tech-Driven Research

One of the biggest challenges in regulating modern research is that science is global, but regulations are fragmented. A study that collects biometric data through an AI-powered health app may have participants in the U.S., Europe, and Asia, all regions with different data protection laws. This creates regulatory headaches for researchers and IRBs, who must navigate a maze of compliance rules that don’t always align (Kalkman et al., 2019).

For example, a clinical trial using wearable devices to track patient vitals may face stricter data privacy regulations in the EU than in the U.S. Similarly, research on AI-driven hiring algorithms may be legal in one country but considered discriminatory in another. Without global research ethics harmonization, independent IRBs are left to make judgment calls on studies that cross multiple jurisdictions, often without clear legal precedent.

To address this, international research organizations are pushing for standardized ethical review processes. The OECD (Organization for Economic Cooperation and Development) and UNESCO have both proposed frameworks for ethical AI and human research protections that could provide more consistency across borders. Additionally, some researchers advocate for mutual recognition agreements between countries, allowing studies approved by a human subject review board in one country to be accepted in others, similar to how clinical trial approvals work in the pharmaceutical industry (Vayena et al., 2018).

However, these efforts face resistance due to differences in cultural attitudes toward privacy, autonomy, and data ownership. In some countries, strict individual data rights are prioritized, while others focus more on the collective benefits of research. Bridging these ethical and legal divides will require ongoing collaboration between regulators, IRBs, and research institutions worldwide.

In summary, regulatory and policy considerations in research are at a crossroads. While existing frameworks like the Common Rule and the GDPR provide important protections, they struggle to keep up with technological change. Emerging ethical concerns, such as AI bias, data re-identification risks, and the long-term consequences of genetic interventions, require new, technology-aware research ethics guidelines.

V. Conclusion

The world of research is evolving at a rapid pace, and IRBs must evolve with it. At Beyond Bound IRB, we understand that ethical oversight requires expertise and care, ensuring that researchers can focus on their groundbreaking work without unnecessary delays. Our mission is clear: no roadblocks, just support, helping you navigate the complexities of IRB approval with a comprehensive, bespoke approach tailored to your specific study.

AI-driven research, genetic innovations, and decentralized trials are pushing the boundaries of traditional ethics. IRBs must provide efficient, stress-free service that adapts to these new challenges. That’s where we come in. Our team of experts works closely with researchers to ensure fast, confident approval through direct engagement, eliminating unnecessary hurdles while maintaining the highest ethical standards.

At Beyond Bound IRB, we eliminate obstacles and foster collaboration to create a clear path for your research. Whether you need assistance with informed consent in digital studies, AI ethics, or data privacy compliance, we provide comprehensive support every step of the way. Through our affiliated network, we streamline multi-institutional approvals, ensuring a smooth and cohesive process.

Unlike rigid, one-size-fits-all solutions, we offer transparent, customized pricing models, making IRB approval accessible and predictable for every research budget. Our goal is simple: to provide efficient, stress-free service that empowers researchers to focus on what they do best, advancing knowledge and innovation. If you’re looking for fast, confident approval with a team that values direct engagement, ethical integrity, and comprehensive support, Beyond Bound IRB is your trusted partner. Let us handle the IRB process so you can focus on driving your research forward.

References

Belmont Report (1979). The Belmont Report: Ethical principles and guidelines for the protection of human subjects of research. U.S. Department of Health & Human Services.

Berg, J. W., Appelbaum, P. S., Lidz, C. W., & Parker, L. S. (2021). Informed consent: Legal theory and clinical practice. Oxford University Press.

Doudna, J. A., & Sternberg, S. H. (2017). A crack in creation: Gene editing and the unthinkable power to control evolution. Houghton Mifflin Harcourt.

Emanuel, E. J., Wendler, D., & Grady, C. (2000). What makes clinical research ethical? JAMA, 283(20), 2701-2711.

Fiske, A., Prainsack, B., & Buyx, A. (2019). Data ethics and citizen science: Strategies for engagement. Data & Society, 3(4), 567-590.

Floridi, L. (2018). Soft ethics and the governance of the digital. Philosophy & Technology, 31(1), 1-8.

Kalkman, S., Mostert, M., Gerlinger, C., van Delden, J. J., & van Thiel, G. J. (2019). Responsible data sharing in international health research: A systematic review of principles and norms. BMC Medical Ethics, 20(1), 21.

Kshetri, N. (2017). Can blockchain strengthen the internet of things? IT Professional, 19(4), 68-72.

Leslie, D. (2019). Understanding artificial intelligence ethics and safety: A guide for the responsible design and implementation of AI systems. The Alan Turing Institute.

Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., & Galstyan, A. (2021). A survey on bias and fairness in machine learning. ACM Computing Surveys (CSUR), 54(6), 1-35.

Metcalf, J., & Crawford, K. (2016). Where are human subjects in big data research? The emerging ethics divide. Big Data & Society, 3(1), 2053951716650211.

Meyer, M. N. (2020). Practical ethics in the age of artificial intelligence. Nature Machine Intelligence, 2(1), 1-3.

Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2).

Nebeker, C., et al. (2019). Challenges and opportunities in digital health research ethics. Journal of Empirical Research on Human Research Ethics, 14(1), 3-16.

Nissenbaum, H. (2019). Privacy in context: Technology, policy, and the integrity of social life. Stanford University Press.

O’Connor, C., Friedrich, A., Scior, K., & Forbes, A. (2021). Remote research: Ethical and methodological considerations for online social science studies. Research Ethics, 17(2), 123-140.

Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447-453.

Ohm, P. (2010). Broken promises of privacy: Responding to the surprising failure of anonymization. UCLA Law Review, 57, 1701.

Piwek, L., Ellis, D. A., Andrews, S., & Joinson, A. (2016). The rise of consumer health wearables: Promises and barriers. PLoS Medicine, 13(2), e1001953.

Vayena, E., Blasimme, A., & Cohen, I. G. (2018). Machine learning in medicine: Addressing ethical challenges. PLOS Medicine, 15(11), e1002689.

Yuste, R., et al. (2017). Four ethical priorities for neurotechnologies and AI. Nature, 551(7679), 159-163.
