When people talk about ethical AI inside companies, the conversation often starts with policies. These policies are typically stored in a neat PDF file on a shared drive. They outline big ideas such as fairness, responsibility, and transparency, and promise that the company will follow them. But for many organizations, that is where the work stops. The principles exist, yet the everyday decisions that shape how AI is designed, trained, tested, and deployed rarely reflect them. Scholars have pointed out that principles alone cannot guarantee responsible outcomes because they often lack the mechanisms needed to guide real action (Mittelstadt, 2019).
This gap between intention and practice is one of the largest challenges in today’s AI landscape. Corporations increasingly depend on AI for hiring, credit scoring, customer service, logistics, and product personalization. These systems influence real people’s lives, and when things go wrong, such as biased decisions, opaque predictions, or unsafe automation, the consequences can be serious. Researchers have shown that even well-meaning organizations struggle to apply ethical ideas consistently because AI development moves fast, involves many teams, and blends technical, business, and social concerns (Jobin et al., 2019). A policy sitting on a shelf cannot keep up with that complexity.
Another issue is that many corporate AI ethics statements look similar because they draw from the same global sets of principles. These principles are important, but they are written at a very high level. They do not tell a product manager how to handle a dataset that may disadvantage certain users, or guide an engineer who needs to decide which metrics to track when tuning a model. As Floridi and Cowls (2019) argue, organizations need frameworks that translate values into concrete steps, steps that can be measured, audited, and repeated. Without this translation, employees are left to interpret ethics on their own, which leads to inconsistency.
Moving from policy to practice also matters for trust. Customers today are more aware of the ways AI affects them, and they expect companies to explain how their systems work and how risks are managed. Governments around the world are introducing new regulations that require more than a list of principles; they demand evidence of responsible processes, documentation, and ongoing monitoring. This shift means companies cannot treat ethical AI as a branding exercise. It must become part of how they operate every day.
Ultimately, change must originate from within the organization. Ethical AI becomes real when teams have the tools, skills, and support to put principles into action. That includes training employees, building oversight structures, sharing responsibility across departments, and creating feedback loops when something goes wrong. As scholars note, an ethical framework is only effective when it is integrated into workflows and connected to incentives, resources, and accountability (Morley et al., 2020). In other words, ethics must be lived, not laminated.
This article explores how corporations can make that shift. Instead of treating AI ethics as a set of ideals, companies can build practical frameworks that guide decisions, reduce risks, and strengthen trust, both inside and outside the organization.
1. Translating Ethical AI Principles into Actionable Corporate Standards

Many companies begin their ethical AI journey by signing onto broad principles such as fairness, transparency, accountability, and privacy. These ideals look admirable on paper, but they often fail to guide the day-to-day decisions that teams must make. Scholars point out that ethical principles are usually too general to direct real behavior, which is why organizations often struggle to apply them consistently (Morley et al., 2021). To make ethics meaningful, companies need standards that are specific enough to shape practices but flexible enough to adapt to different products, markets, and users.
Why Ethical Principles Are Difficult to Operationalize
The first challenge comes from the nature of ethical principles themselves. Most of them were written to be universal and high-level. They were designed to apply across industries and cultures, which means they avoid giving concrete instructions. For example, saying “AI systems should be fair” does not tell a data scientist which fairness metric to use or what trade-offs to prioritize. Even within academia, fairness has dozens of competing definitions, and choosing among them depends on context (Mehrabi et al., 2021). Without internal standards that give guidance, teams often default to whatever is easiest or most familiar.
Misalignment Between Ethical Ideals and Business Processes
Another challenge is that ethical principles do not automatically align with business processes. A product team might be under pressure to ship quickly, while a risk team might want more time to check for potential harms. A policy document cannot resolve those tensions by itself. Companies need to create standards that assign responsibilities, define quality thresholds, and describe what “good enough” looks like for different stages of development. As Stahl et al. (2022) note, organizations must embed ethics into their operational structure so that ethical decision-making becomes part of routine practice rather than an afterthought.
Turning Principles Into Concrete, Product-Level Standards
A practical way to start is by translating each principle into a set of internal rules or expectations. For instance, transparency might become a requirement for documentation, explainability tests, and user-friendly disclosures. Fairness might translate into bias assessments, demographic audits, and guidelines for representative datasets. Accountability could lead to clear ownership structures, escalation procedures, and model-specific review paths. These standards help move the conversation from “what we believe” to “what we actually do.”
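To make this concrete, here is a minimal sketch of how a team might encode principle-level standards as checkable release criteria. The principle names, requirement keys, and gate logic below are illustrative assumptions, not an established framework, and a real review process would involve far more than a boolean checklist.

```python
# Minimal sketch: principle-level standards expressed as checkable release criteria.
# The principles, requirement keys, and gate rule are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Requirement:
    key: str                 # e.g. "model_card_complete"
    description: str
    satisfied: bool = False

@dataclass
class PrincipleStandard:
    principle: str           # e.g. "transparency"
    requirements: list[Requirement] = field(default_factory=list)

    def unmet(self) -> list[str]:
        return [r.key for r in self.requirements if not r.satisfied]

def release_gate(standards: list[PrincipleStandard]) -> bool:
    """Return True only if every concrete requirement behind every principle is met."""
    blockers = {s.principle: s.unmet() for s in standards if s.unmet()}
    for principle, keys in blockers.items():
        print(f"[BLOCKED] {principle}: missing {keys}")
    return not blockers

transparency = PrincipleStandard("transparency", [
    Requirement("model_card_complete", "Model card filled in and reviewed", satisfied=True),
    Requirement("user_disclosure", "Plain-language disclosure shipped with the feature"),
])
fairness = PrincipleStandard("fairness", [
    Requirement("bias_assessment", "Demographic audit run on the evaluation set", satisfied=True),
])

if release_gate([transparency, fairness]):
    print("All standards met; release may proceed.")
```

The value of a structure like this is not the code itself but the forced translation step: each abstract principle has to be broken into requirements specific enough that someone can say whether they were met.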
Importantly, this work cannot be done in isolation. Ethical principles touch on legal risk, technical feasibility, user experience, and social impact. That means the people defining standards must include engineers, designers, ethicists, legal teams, and, ideally, voices from affected communities. Research shows that multi-disciplinary collaboration leads to clearer and more realistic ethical frameworks because different teams bring different concerns and expertise (Macnish & Stahl, 2018). When standards are created with input from across the company, people are more likely to trust them and follow them.
Ensuring Standards Evolve Over Time
Companies must also think about how these standards will evolve. AI systems change over time, and so do the risks they introduce. New laws, new technologies, and new public expectations can quickly make old standards outdated. To keep up, organizations need revision cycles, feedback mechanisms, and review bodies that can adjust standards as circumstances shift. Experts argue that ethical AI frameworks must be “living” structures rather than fixed documents (Fjeld et al., 2020). This adaptability helps companies stay ahead instead of reacting only when a problem becomes public.
Finally, internal standards should be tied to measurable outcomes. Without metrics, teams cannot know whether they are meeting expectations. Metrics might include the number of models passing bias checks, the percentage of systems with full documentation, or how often teams run explainability evaluations. These measurements do not solve ethical challenges by themselves, but they make progress visible and actionable. Metrics also help leaders understand where to invest resources and where additional support or training is needed.
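As a rough illustration of how such metrics can be rolled up, the sketch below aggregates a hypothetical model inventory; the field names and the inventory itself are invented for the example.

```python
# Illustrative sketch: rolling up compliance metrics from a hypothetical model inventory.
# The inventory records and their fields ("bias_check_passed", ...) are assumptions.
inventory = [
    {"model": "credit-scoring-v3",  "bias_check_passed": True,  "has_documentation": True,  "explainability_evals": 4},
    {"model": "resume-screener-v1", "bias_check_passed": False, "has_documentation": True,  "explainability_evals": 0},
    {"model": "churn-predictor-v7", "bias_check_passed": True,  "has_documentation": False, "explainability_evals": 2},
]

def pct(flag: str) -> float:
    """Share of models in the inventory where the given boolean flag is True."""
    return 100.0 * sum(m[flag] for m in inventory) / len(inventory)

print(f"Models passing bias checks:      {pct('bias_check_passed'):.0f}%")
print(f"Models with full documentation:  {pct('has_documentation'):.0f}%")
print(f"Avg explainability evals/model:  {sum(m['explainability_evals'] for m in inventory) / len(inventory):.1f}")
```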
Translating principles into practical corporate standards is not glamorous work, but it is foundational. It allows ethical values to be felt in everyday choices, from how data is cleaned to how models are deployed. When a company can point to clear standards, supported by evidence and consistent processes, ethics becomes not just an aspiration but a workable part of how it builds technology.
2. Designing Organizational Structure and Governance for Ethical AI

Once a company has a set of ethical standards, the next challenge is figuring out who is actually responsible for putting them into practice. This is where governance comes in. Governance is not just about committees or paperwork; it is the set of structures, roles, and decision-making paths that determine how ethical concerns are handled inside the organization. Researchers consistently note that responsible AI cannot succeed without strong internal governance because it provides the stability and clarity needed to manage complex technologies (Umbrello & van de Poel, 2021).
Ethical AI as a Shared Organizational Responsibility
A common mistake companies make is assuming that ethical AI can be handled by a single team. In reality, AI touches many parts of a business. Engineers build models, product managers set goals, compliance teams track risks, legal teams monitor regulations, and executives push for growth. Ethical challenges emerge from the interaction of all these groups, not from one department alone. As Dignum (2019) explains, ethical AI requires shared responsibility and structures that allow people with different expertise to work together. Without cooperation, problems fall through the cracks.
A strong governance model usually starts with clear roles and decision rights. Many companies appoint an AI Ethics Lead, Chief AI Ethics Officer, or similar role to coordinate efforts across the organization. This person acts as a central point for guidance, escalation, and alignment. But they cannot work alone. They need committees or working groups that include representatives from engineering, design, data science, compliance, security, user research, and sometimes external experts. These groups help evaluate risk, review models, and make decisions that reflect both technical realities and social impact.
Centralized, Distributed, and Hybrid Governance Models
The structure of governance can vary depending on a company’s size and culture. Some organizations use a centralized model, where a core ethics office sets policies, runs reviews, and signs off on high-risk projects. Others use a distributed approach, where each product or business unit has its own ethics champion or review pathway. There is also a hybrid approach that mixes both. Studies show that there is no single “best” structure; what matters is that the responsibilities are clear, decisions are documented, and teams know when and how to escalate concerns (Shneiderman, 2022).
Governance also needs to fit into the daily workflow. Ethical reviews should not be treated as a final hurdle at the end of development. Instead, they should be woven into early planning, data collection, model design, testing, and post-deployment monitoring. This makes the process more actionable and less disruptive. As Brundage et al. (2020) emphasize, responsible AI requires continuous oversight rather than one-time approvals. When teams know that ethics is part of the regular process, they are more likely to take it seriously and less likely to view it as a bureaucratic burden.
Another essential piece of governance is accountability. Employees should know who is responsible for which decisions, both when everything goes well and when something goes wrong. Accountability systems may include documentation requirements, model registration, risk scoring, or human-in-the-loop checkpoints. These steps make AI systems traceable and easier to audit. Scholars argue that accountability encourages better decision-making because people understand how their actions affect both users and the company’s reputation (Rahwan, 2018).
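To show what model registration and risk scoring might look like in practice, here is a minimal sketch assuming a hypothetical registry schema and a toy scoring rule; real registries and risk methodologies would be considerably richer.

```python
# Minimal sketch of a model registry entry supporting traceability and audit.
# The field names and the one-point-per-flag risk rule are illustrative assumptions.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class ModelRegistration:
    model_id: str
    owner: str                  # accountable individual or team
    use_case: str
    risk_factors: dict          # e.g. {"affects_individuals": True, "automated_decision": False}
    approved_by: str | None = None

    def risk_score(self) -> int:
        # Toy rule: one point per flagged risk factor.
        return sum(1 for flagged in self.risk_factors.values() if flagged)

    def registration_record(self) -> str:
        """Serialize the entry, with score and timestamp, for an auditable registry."""
        record = asdict(self)
        record["risk_score"] = self.risk_score()
        record["registered_at"] = datetime.now(timezone.utc).isoformat()
        return json.dumps(record, indent=2)

entry = ModelRegistration(
    model_id="loan-approval-v2",
    owner="credit-risk-ml-team",
    use_case="Pre-screening consumer loan applications",
    risk_factors={"affects_individuals": True, "automated_decision": True, "sensitive_data": True},
    approved_by="model-risk-committee",
)
print(entry.registration_record())
```

Even a simple record like this answers the accountability questions raised above: who owns the model, who approved it, and how risky it was judged to be at registration time.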
Internal Transparency and Psychological Safety
Transparency inside the organization, including clear escalation paths, is just as important as transparency to the public. When governance processes are visible, predictable, and fair, employees feel more confident raising concerns. A healthy governance structure encourages open communication, welcomes questions, and provides safe channels for reporting issues. Without this kind of culture, governance becomes just another formal process that people learn to avoid.
Finally, governance must adapt as AI evolves. New risks emerge, new tools become available, and new regulations appear. This means companies need ongoing evaluation of their governance model. Regular audits, performance reviews, and stakeholder feedback help ensure that governance structures remain effective and aligned with the company’s goals. As Wells and Bedford (2022) point out, AI governance must be flexible and evolving because AI itself is dynamic.
In summary, designing a solid organizational structure for ethical AI is not just a management exercise. It is the backbone that supports responsible development, protects users, and keeps the company aligned with its values. When governance is clear and collaborative, ethical AI becomes not just achievable but sustainable.
3. Embedding Ethical Practices Across the AI Development Lifecycle

Once a company has set up its governance structure, ethics must move into the everyday work of building AI systems. This means that responsible practices cannot be reserved for crisis moments or end-of-project reviews. Instead, they must become part of each phase of the AI development lifecycle, from the moment a problem is defined to the long-term monitoring that happens after deployment. Scholars argue that the lifecycle approach is essential because ethical issues often emerge from routine choices that seem small in the moment but accumulate into large impacts (Mökander & Floridi, 2021).
Problem Definition and Data Practices
The process begins with problem definition. Before a single dataset is collected or a model is chosen, teams must ask whether AI is truly the right solution. This step is often skipped because the push to innovate can overshadow reflection. But early questioning helps prevent the development of systems that may unintentionally harm users or reinforce unnecessary automation. Research shows that early-stage reflection reduces downstream risks because it clarifies goals, identifies potential harms, and encourages teams to think about the communities affected by the system (Raji et al., 2020). This makes the rest of the development process more intentional.
Next comes data sourcing and preparation, where many ethical issues first appear. The quality of data determines the quality of the model, and biased or incomplete datasets can create unfair outcomes. Ethical practice at this stage includes reviewing data sources, checking for representativeness, documenting limitations, and ensuring consent and privacy protections are respected. Scholars point out that most AI fairness problems stem from poor data practices rather than model architecture, which is why transparency around data collection is so important (Gebru et al., 2021). Teams need clear guidance on which datasets are appropriate, how much diversity is required, and what to do when gaps appear.
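The sketch below shows one simple way a team might check representativeness, comparing the share of each group in a dataset against a reference population; the group labels, reference shares, and tolerance are assumptions chosen for illustration.

```python
# Illustrative representativeness check: compare each group's share in a training
# dataset against a reference population. Group labels and thresholds are assumptions.
from collections import Counter

def representation_gaps(records, group_field, reference_shares, tolerance=0.05):
    """Flag groups whose share in the data deviates from the reference by more than `tolerance`."""
    counts = Counter(r[group_field] for r in records)
    total = sum(counts.values())
    gaps = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > tolerance:
            gaps[group] = {"observed": round(observed, 3), "expected": expected}
    return gaps

training_data = [{"age_band": "18-29"}] * 120 + [{"age_band": "30-49"}] * 700 + [{"age_band": "50+"}] * 180
reference = {"18-29": 0.25, "30-49": 0.45, "50+": 0.30}  # e.g. census or customer-base shares

print(representation_gaps(training_data, "age_band", reference))
# -> {'18-29': {'observed': 0.12, 'expected': 0.25}, '30-49': {'observed': 0.7, 'expected': 0.45}, '50+': {'observed': 0.18, 'expected': 0.3}}
```

A check like this does not decide what an acceptable gap is; that remains a judgment call for the team and its standards. It simply makes the gap visible and documentable.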
Model Development and Testing
Model development is another critical phase. As engineers build and refine models, they must consider not just accuracy but also fairness, interpretability, and robustness. Ethical practices here may include running bias tests, using interpretable models when decisions affect people directly, or performing adversarial testing to see how the system behaves under stress. There is growing academic support for integrating “ethics-by-design” techniques during model development to help engineers anticipate risks in real time (Liao & Müller, 2023). These built-in checks help teams catch issues early rather than patching them after deployment.
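As one illustration of a built-in check, the sketch below computes a demographic parity gap on a classifier's predictions. The groups, example data, and the 0.10 review threshold are assumptions, and which fairness metric is appropriate depends on the use case.

```python
# A minimal bias-test sketch: demographic parity difference on binary predictions.
# Groups, data, and the 0.10 threshold are illustrative assumptions.
def demographic_parity_gap(predictions, groups, positive_label=1):
    """Largest difference in positive-prediction rates between any two groups."""
    rates = {}
    for g in set(groups):
        preds_g = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(p == positive_label for p in preds_g) / len(preds_g)
    gap = max(rates.values()) - min(rates.values())
    return rates, gap

preds  = [1, 1, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rates, gap = demographic_parity_gap(preds, groups)
print(rates)                      # {'A': 0.8, 'B': 0.4} (order may vary)
print(f"parity gap = {gap:.2f}")  # 0.40
if gap > 0.10:                    # assumed internal review threshold
    print("Flag for review: positive-rate gap exceeds the team's threshold.")
```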
Evaluation and testing carry their own set of responsibilities. Traditional testing focuses on performance metrics, but ethical AI requires expanding that lens. Teams should test models using diverse groups, edge cases, and scenarios that reflect real-world complexity. Scholars have emphasized the value of “sociotechnical testing,” which looks at how models interact with human behavior, cultural norms, and social systems (Selbst et al., 2019). This helps reveal unexpected risks, such as systems that work well in controlled environments but behave poorly in the field.
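A small sketch of slice-based evaluation, assuming hypothetical user segments, shows how an acceptable aggregate score can hide a weak segment.

```python
# Sketch of slice-based evaluation: report accuracy per user segment instead of a single
# aggregate number. Segments, labels, and the example data are illustrative assumptions.
from collections import defaultdict

def per_slice_accuracy(examples):
    """examples: iterable of (slice_name, true_label, predicted_label)."""
    correct, total = defaultdict(int), defaultdict(int)
    for slice_name, y_true, y_pred in examples:
        total[slice_name] += 1
        correct[slice_name] += int(y_true == y_pred)
    return {s: correct[s] / total[s] for s in total}

examples = [
    ("native_speaker",     1, 1), ("native_speaker",     0, 0), ("native_speaker",     1, 1),
    ("non_native_speaker", 1, 0), ("non_native_speaker", 0, 0), ("non_native_speaker", 1, 0),
    ("screen_reader_user", 1, 1), ("screen_reader_user", 0, 1),
]

overall = sum(int(t == p) for _, t, p in examples) / len(examples)
print(f"overall accuracy: {overall:.2f}")   # 0.62 - looks tolerable in aggregate
for slice_name, acc in per_slice_accuracy(examples).items():
    print(f"{slice_name:>20}: {acc:.2f}")   # the per-slice view exposes the weak segments
```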
Deployment and Ongoing Monitoring
Deployment is where ethical considerations meet practical constraints. Before a system goes live, teams must confirm that appropriate safeguards are in place. This might include human oversight, user instructions, limitations on how the tool can be used, or clear channels for feedback and issue reporting. Deploying AI ethically also involves transparency: users should understand what the system does, what its limitations are, and how they can challenge its decisions if needed. Transparency builds trust and gives users a sense of agency rather than leaving them in the dark.
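One common safeguard is a routing rule that sends low-confidence or high-impact cases to a human reviewer instead of acting automatically. The sketch below is a minimal illustration; the confidence threshold and the decision fields are assumptions.

```python
# Minimal sketch of a deployment safeguard: route uncertain or high-stakes predictions
# to human review. The threshold and record fields are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Prediction:
    case_id: str
    score: float          # model confidence in the positive class
    high_impact: bool     # e.g. the decision materially affects a person

def route(pred: Prediction, confidence_threshold: float = 0.9) -> str:
    confident = pred.score >= confidence_threshold or pred.score <= 1 - confidence_threshold
    if pred.high_impact or not confident:
        return "human_review"     # a person makes or confirms the call
    return "automated"            # safe to act on directly, logged for later audit

for p in [Prediction("c-101", 0.97, False), Prediction("c-102", 0.62, False), Prediction("c-103", 0.99, True)]:
    print(p.case_id, "->", route(p))
# c-101 -> automated, c-102 -> human_review, c-103 -> human_review
```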
Finally, ethics does not end once a model is launched. AI systems evolve as real-world conditions change, and ongoing monitoring is necessary to catch new issues. Drift in data, shifts in user behavior, and unexpected interactions can all affect performance. Ethical monitoring includes tracking complaints, analyzing error patterns, updating models, and regularly reviewing whether the system is still delivering fair and safe outcomes. Researchers stress that without long-term monitoring, even well-designed systems can become harmful over time (Amershi et al., 2019). Continuous attention is what keeps AI aligned with both business goals and societal expectations.
Embedding ethics throughout the AI lifecycle is not about slowing teams down; it is about helping them build systems that work reliably, fairly, and safely. When ethical practices are integrated into each phase, they become normal parts of the workflow rather than separate tasks. This reduces risk, strengthens trust, and helps companies deliver AI that users can feel confident relying on.
4. Building Organizational Culture and Capability for Ethical AI

Even the best policies and governance structures cannot succeed without a culture that supports them. Culture is the shared set of beliefs, habits, and expectations that guide how people behave when no one is watching. When a company wants to practice ethical AI, it must build a culture where employees feel responsible for the impact of the systems they help create. Scholars argue that ethical technology work becomes meaningful only when employees believe ethics is “part of their job,” not an optional extra (Sengers et al., 2018).
Leadership as the Foundation of Ethical Culture
Creating this kind of culture begins with leadership. People take cues from what leaders pay attention to, whether it is speed, innovation, compliance, or responsibility. When leaders consistently talk about ethics, fund ethics initiatives, and reward thoughtful decision-making, employees feel encouraged to do the same. But when ethics is mentioned only in crisis moments, workers quickly learn that it is not truly valued. Research shows that visible leadership support is one of the strongest predictors of whether ethical frameworks become embedded in daily practice (Kaiser & MacInnis, 2019). This includes leaders asking questions about data quality, fairness, and user impact in meetings, not just technical performance.
Training also plays a major role in building ethical capability. Ethical AI training should not be limited to a single introductory session or a PDF checklist. People in different roles need different forms of preparation. Engineers may need training on bias detection tools, model documentation, and transparency techniques. Product managers may need help understanding user impacts, risk trade-offs, and regulatory requirements. Designers may need guidance on inclusive design and accessible interfaces. Studies show that training tied to real examples and practical tasks is far more effective than abstract lectures because it gives people the tools to act ethically, not just the language to talk about it (Vakkuri & Abrahamsson, 2021).
Another key part of culture-building is creating opportunities for employees to raise concerns without fear. If speaking up feels risky, ethical problems remain invisible until it is too late. Companies need safe reporting channels, anonymous feedback options, and leaders who treat questions as valuable, not disruptive. Scholars emphasize that psychological safety, the feeling that it is acceptable to ask questions and admit uncertainty, is essential for preventing ethical blind spots in AI development (Edmondson, 2019). When people feel safe, they are more willing to point out risks early.
Everyday Rituals That Reinforce Ethics
Companies can also build culture through everyday rituals. Simple practices, such as starting project kickoffs with a brief discussion of user impact or reviewing ethical checklists during sprint planning, can normalize ethical thinking. “Ethics champions” programs, where trained volunteers support their teams with ethical questions, are another technique that helps bridge the gap between policy and practice. Research shows that peer support networks make ethics feel more accessible because people are more comfortable asking colleagues for help than escalating to executives (Bietti, 2020).
Aligning Incentives With Ethical Behavior
Incentives also matter. When promotions, bonuses, and recognition focus only on speed and output, teams naturally deprioritize anything that slows them down, including ethics. If, however, managers evaluate teams based on thoughtful risk assessment, documentation quality, or responsible deployment, employees begin to understand that careful work is valued. As Hagendorff (2020) notes, ethics becomes real only when it is tied to organizational rewards and consequences.
Cross-functional collaboration strengthens culture even further. Ethical AI depends on blending technical knowledge with social awareness, legal expertise, and user understanding. Bringing these perspectives together helps teams see problems they might have missed on their own. Workshops, review sessions, and shared tools help break down silos. Research shows that multidisciplinary collaboration improves both ethical outcomes and employee confidence because it reduces uncertainty and distributes responsibility (Holstein et al., 2019).
Finally, culture must evolve along with technology. New AI techniques bring new risks, and employee skills must grow to match them. This means updating training programs, refreshing guidelines, and sharing lessons learned from past projects. Effective companies treat ethical capability building as an ongoing process, not a one-time initiative. Over time, the culture becomes resilient, able to adapt to change while staying anchored in the company’s values.
Building organizational culture and capability for ethical AI is a long-term effort. But it pays off. When people understand the impact of their work, feel empowered to ask questions, and have the skills to address problems, ethics becomes a natural part of how the company builds and deploys technology. The result is stronger products, more trust from users, and fewer crises that damage reputation and morale.
5. Measuring Ethical AI Performance and Ensuring Accountability

Once ethical practices are embedded into the development process, companies need a way to know whether those practices are actually working. Good intentions do not guarantee good outcomes, and without measurement, it becomes impossible to tell if an AI system is behaving fairly, safely, or consistently in the real world. Scholars emphasize that ethical AI requires both evaluation and accountability because these two elements create the feedback loop that drives improvement (Morley et al., 2021). In other words, you cannot manage what you do not measure.
Deciding What to Measure
The first step is deciding what to measure. Traditional AI metrics such as accuracy, precision, and recall tell only part of the story. Ethical AI requires looking at how a system behaves across different groups, contexts, and conditions. Companies may need to track bias-related metrics, error rates for marginalized groups, transparency indicators, or the frequency of human overrides. Researchers note that ethical risks often show up at the “edges,” among smaller or less represented user groups, which makes broad averages misleading (Buolamwini & Gebru, 2018). Choosing metrics that capture this nuance is essential.
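For instance, a team might track how often human reviewers override the model in each deployment context, as in the sketch below; the decision-log schema ("context", "model_decision", "final_decision") is an assumption for illustration.

```python
# Illustrative sketch: human-override rate per deployment context, computed from decision logs.
# The log schema and example entries are assumptions.
from collections import defaultdict

logs = [
    {"context": "loan_prescreen", "model_decision": "reject",  "final_decision": "approve"},
    {"context": "loan_prescreen", "model_decision": "approve", "final_decision": "approve"},
    {"context": "loan_prescreen", "model_decision": "reject",  "final_decision": "reject"},
    {"context": "support_triage", "model_decision": "low",     "final_decision": "low"},
    {"context": "support_triage", "model_decision": "low",     "final_decision": "urgent"},
]

overrides, totals = defaultdict(int), defaultdict(int)
for entry in logs:
    totals[entry["context"]] += 1
    overrides[entry["context"]] += int(entry["model_decision"] != entry["final_decision"])

for context in totals:
    print(f"{context}: override rate {overrides[context] / totals[context]:.0%}")
# loan_prescreen: 33%, support_triage: 50%  -- a rising rate is a signal worth investigating
```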
Qualitative indicators also play a role. Not every ethical concern can be captured numerically. For example, whether a system is understandable to users, whether it respects cultural norms, or whether people feel comfortable interacting with it cannot be reduced to a single statistic. Interviews, user feedback, testing panels, community consultations, and internal reflections all provide valuable insight. Studies show that combining quantitative and qualitative measures gives a more holistic understanding of risk because it blends technical performance with lived experience (Fjeld et al., 2020). This mixed approach helps companies avoid blind spots.
Establishing Regular Assessment Mechanisms
Once metrics are defined, companies need mechanisms to assess them regularly. This usually involves audits, structured evaluations that look at how a system was built, tested, and deployed. Audits can be internal or external, but both serve important functions. Internal audits help teams catch issues early and maintain continuous improvement. External audits add credibility, especially when products affect sensitive areas like hiring, healthcare, or credit scoring. Scholars argue that independent audits increase trust because they demonstrate that a company is willing to have its systems scrutinized by a neutral party (Raji et al., 2022).
Documentation is another important part of measurement. Teams need a clear record of what decisions were made, why they were made, and what risks were considered. Model cards, data sheets, decision logs, and version histories all support transparency. Documentation also creates a trail that auditors, regulators, or internal reviewers can follow. Researchers note that documentation not only improves accountability but also strengthens organizational memory, making it easier for teams to learn from past mistakes and successes (Mitchell et al., 2019). Without it, companies risk repeating the same errors with every new system.
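As a concrete illustration, here is a minimal model card structure loosely inspired by the sections proposed by Mitchell et al. (2019); the specific fields and values are assumptions and will differ from team to team.

```python
# A minimal, illustrative model card rendered as plain text. The fields and values
# are assumptions; Mitchell et al. (2019) describe the motivating sections in detail.
model_card = {
    "Model details": {"name": "support-ticket-classifier", "version": "1.4.0", "owner": "cx-ml-team"},
    "Intended use": "Routing inbound support tickets to queues; not for customer-facing decisions.",
    "Training data": "Internal tickets 2022-2024; limitations documented in the linked datasheet.",
    "Evaluation": {"overall_f1": 0.91, "lowest_slice_f1": 0.78, "slices_evaluated": 12},
    "Ethical considerations": "Misrouting delays support; monitored via per-slice error dashboards.",
    "Caveats": "Not evaluated on non-English tickets.",
}

def render_card(card: dict) -> str:
    """Render the card as readable sections for reviewers, auditors, or a docs site."""
    lines = []
    for section, content in card.items():
        lines.append(f"## {section}")
        if isinstance(content, dict):
            lines.extend(f"- {key}: {value}" for key, value in content.items())
        else:
            lines.append(content)
        lines.append("")
    return "\n".join(lines)

print(render_card(model_card))
```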
Ensuring accountability also means making responsibilities explicit. People must know who is responsible for evaluating models, who signs off on risks, and who is expected to intervene when issues arise. Some organizations use RACI charts (Responsible, Accountable, Consulted, Informed). Others rely on ethics review boards or model risk committees. Regardless of the structure, clarity matters. When accountability is ambiguous, problems get ignored or handled too late. Scholars emphasize that accountability structures help align incentives and ensure that ethical decisions do not depend solely on individual goodwill (Rahwan, 2018).
Transparency With Users and Stakeholders
Transparency with users and external stakeholders further strengthens accountability. People affected by AI systems deserve to know how those systems work, what data they rely on, and what their limitations are. Companies can share public transparency reports, publish model summaries, or create channels for user appeals. These actions allow people to challenge decisions that seem unfair or incorrect, which in turn gives companies valuable information about how their systems perform in the real world. Research shows that transparency encourages trust and reduces the chances of public backlash when issues arise (Wieringa, 2020).
Monitoring must also continue after deployment. AI systems rarely stay static: data drifts, user behavior evolves, and new contexts emerge. A model that was fair and accurate at launch may produce different results months later. Ongoing monitoring, such as periodic bias tests, performance reviews, or automated alerts, helps ensure that systems remain aligned with ethical standards over time. Experts warn that without ongoing monitoring, even well-designed systems can degrade and produce harmful outcomes unexpectedly (Amershi et al., 2019). Maintaining accountability after deployment protects both users and the company.
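As one example of such a check, the sketch below computes a population stability index (PSI) for a single binned feature, comparing a launch-time baseline against recent data. The bins, counts, and the 0.2 alert threshold are common rules of thumb treated here as assumptions rather than universal constants.

```python
# Sketch of a simple post-deployment drift check: population stability index (PSI)
# over one feature's binned distribution. Bins, counts, and the 0.2 alert threshold
# are illustrative assumptions.
import math

def psi(expected_counts, observed_counts, eps=1e-6):
    """PSI between a baseline ('expected') and a recent ('observed') binned distribution."""
    e_total, o_total = sum(expected_counts), sum(observed_counts)
    value = 0.0
    for e, o in zip(expected_counts, observed_counts):
        e_share = max(e / e_total, eps)
        o_share = max(o / o_total, eps)
        value += (o_share - e_share) * math.log(o_share / e_share)
    return value

baseline_bins = [120, 300, 340, 180, 60]   # e.g. applicant income bins at launch
recent_bins   = [40, 180, 320, 300, 160]   # same bins, most recent month

score = psi(baseline_bins, recent_bins)
print(f"PSI = {score:.3f}")                # PSI = 0.310
if score > 0.2:
    print("Significant drift: trigger a fairness and performance re-evaluation.")
```

An alert like this does not say why the population shifted; it simply tells the team that the assumptions behind the original fairness and performance evaluations may no longer hold.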
Ultimately, measuring ethical AI performance and ensuring accountability is not about policing employees or slowing innovation. It is about understanding how systems behave, spotting issues early, and building AI that people can trust. When companies track the right metrics, run meaningful audits, maintain documentation, and communicate openly, ethical AI becomes measurable, manageable, and sustainable. Accountability strengthens culture, supports good governance, and shows users that the company takes its responsibilities seriously.
Conclusion

As organizations expand their use of AI, the companies that will stand out are the ones that treat ethical responsibility as a daily practice, not a slogan. Trust is earned when systems are built with expertise and care, supported by structures that remove confusion about who is responsible and how decisions are made. Scholars remind us that true trust forms through consistent behavior, not public statements (O’Neil, 2016). When a company’s actions match its values, users begin to see that ethics is not an accessory; it is a commitment.
Turning ethical AI into real organizational value requires more than principles. It demands a comprehensive, bespoke approach that embeds responsibility into workflows, measurements, and accountability systems. Companies that do this well often enjoy better product quality, stronger reputations, and more loyal customer relationships. Research shows that responsible technology supports competitive advantage because users gravitate toward products that are transparent and reliable, especially in sensitive domains like healthcare and finance (Wright & Schultz, 2018). Ethical AI becomes an asset, not a burden.
This is also why organizations benefit from partnering with dedicated experts who can provide ethical expertise, direct engagement, and an established network to guide them through complex ethical and regulatory environments. At Beyond Bound IRB, we help research teams and corporate innovators move from intention to implementation with efficient, stress-free service and transparent, customized pricing for unique project needs. Our team ensures there are no roadblocks, just support, allowing you to operationalize ethical oversight with confidence. And for teams who want to deepen their internal capabilities, IRB Heart offers training that helps leaders and practitioners foster collaboration, strengthen oversight skills, and build a responsible AI culture from within.
Finally, operationalizing trust is an ongoing process. Ethical challenges evolve, user expectations shift, and regulations change. The organizations that thrive are those willing to adapt, monitor, and refine their systems continuously. Scholars emphasize that responsible AI is never fully “finished”; it must be revisited and strengthened over time (Floridi & Taddeo, 2016). With strong partners and reliable guidance, companies can transform ethical responsibility into a sustained source of resilience, innovation, and long-term value.
If your organization is ready to turn ethical AI into a strategic advantage, supported by clarity, expertise, and a truly comprehensive, bespoke approach, Beyond Bound IRB and IRB Heart are here to guide the journey. Together, we help you build structures that last, ensure accountability, and create the kind of trust that moves organizations forward.
References
Floridi, L., & Cowls, J. (2019). A Unified Framework of Five Principles for AI in Society. Harvard Data Science Review.
Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1, 389–399.
Mittelstadt, B. (2019). Principles alone cannot guarantee ethical AI. Nature Machine Intelligence, 1, 501–507.
Morley, J., Floridi, L., Kinsey, L., & Elhalal, A. (2020). From What to How: An Initial Review of Publicly Available AI Ethics Tools, Methods and Research to Translate Principles into Practices. Science and Engineering Ethics, 26, 2141–2168.
Fjeld, J., Achten, N., Hilligoss, H., Nagy, A., & Srikumar, M. (2020). Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based AI Guidelines. Berkman Klein Center.
Macnish, K., & Stahl, B. C. (2018). Ethics of AI and Big Data: A Case Study Approach. Springer.
Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., & Galstyan, A. (2021). A Survey on Bias and Fairness in Machine Learning. ACM Computing Surveys.
Morley, J., Elhalal, A., García, F., Kinsey, L., & Floridi, L. (2021). Ethics as a Service: A Pragmatic Operationalisation of AI Ethics. Philosophy & Technology, 34, 1355–1376.
Stahl, B. C., Rodrigues, R., Santiago, N., & Macnish, K. (2022). Implementing Responsible Artificial Intelligence: A Framework and Toolset for Public Administrations. AI & Society.
Brundage, M., et al. (2020). Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims. arXiv.
Dignum, V. (2019). Responsible Artificial Intelligence: How to Develop and Use AI in a Responsible Way. Springer.
Rahwan, I. (2018). Society-in-the-loop: Programming the Algorithmic Social Contract. Ethics and Information Technology, 20, 5–14.
Shneiderman, B. (2022). Human-Centered AI. Oxford University Press.
Umbrello, S., & van de Poel, I. (2021). Mapping Value Sensitive Design onto AI Ethics Guidelines. AI and Ethics, 1(2), 257–265.
Wells, P., & Bedford, D. (2022). AI Governance: A Holistic Approach to Governing Artificial Intelligence. Journal of Responsible Technology, 10.
Amershi, S., et al. (2019). Guidelines for Human-AI Interaction. Proceedings of the CHI Conference on Human Factors in Computing Systems.
Gebru, T., et al. (2021). Datasheets for Datasets. Communications of the ACM, 64(12), 86–92.
Liao, T., & Müller, V. C. (2023). Ethics-by-Design in Artificial Intelligence. AI and Ethics, 3(1), 45–57.
Mökander, J., & Floridi, L. (2021). Ethics-Based Auditing of Automated Decision-Making Systems. AI and Ethics, 1, 465–482.
Raji, I. D., et al. (2020). Closing the AI Accountability Gap. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency.
Selbst, A. D., et al. (2019). Fairness and Abstraction in Sociotechnical Systems. Proceedings of the Conference on Fairness, Accountability, and Transparency.
Bietti, E. (2020). From Ethics Washing to Ethics Bashing: A View on Tech Ethics from Within Moral Philosophy. Proceedings of the Conference on Fairness, Accountability, and Transparency.
Edmondson, A. (2019). The Fearless Organization: Creating Psychological Safety in the Workplace for Learning, Innovation, and Growth. Wiley.
Hagendorff, T. (2020). The Ethics of AI Ethics: An Evaluation of Guidelines. Minds and Machines, 30, 99–120.
Holstein, K., et al. (2019). Improving Fairness in Machine Learning Systems: What Do Industry Practitioners Need? CHI Conference on Human Factors in Computing Systems.
Kaiser, S., & MacInnis, D. (2019). Managing the Ethical Culture of Organizations. Journal of Business Ethics, 156, 1–4.
Sengers, P., et al. (2018). The Politics of Designing AI Systems. AI & Society, 33, 1–10.
Vakkuri, V., & Abrahamsson, P. (2021). The Key Considerations in Ethics of AI: A Systematic Literature Review of Ethical AI. IEEE Access.
Buolamwini, J., & Gebru, T. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Proceedings of the Conference on Fairness, Accountability, and Transparency.
Mitchell, M., et al. (2019). Model Cards for Model Reporting. Proceedings of the Conference on Fairness, Accountability, and Transparency.
Raji, I. D., et al. (2022). Outsider Oversight: Designing External Audits for Accountability in AI. Proceedings of the Conference on Fairness, Accountability, and Transparency.
Wieringa, M. (2020). What to Account for When Accounting for Algorithms: A Systematic Literature Review. ACM Conference on Fairness, Accountability, and Transparency.
Floridi, L., & Taddeo, M. (2016). What Is Data Ethics? Philosophical Transactions of the Royal Society A, 374(2083).
O’Neil, C. (2016). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown.
Wright, D., & Schultz, E. (2018). Ethics in Artificial Intelligence: Designing for Trust. AI & Society, 33, 575–587.

