How do we tackle ethical problems arising in Artificial Intelligence?
- Sabrina Tariq

- Jan 15, 2022
- 2 min read

Artificial Intelligence is praised for performing some of the most complex tasks a computer can handle. Yet facial recognition systems have repeatedly failed to tell whether a person of East Asian descent has their eyes open or closed. Incidents like this invite a closer look at the facial recognition technology built into everyday devices; more broadly, AI systems often face criticism for biased and, in some cases, racist behavior.
These highly intelligent machines are expected to bring in an estimated $554 billion in revenue by 2024 (Forbes). Their rise is part of what is known as the “fourth industrial revolution”: the transformation of industry through AI, the Internet of Things (IoT), and various other technologies.
Subsequently, the growing use of artificial intelligence and machine learning raises the question: how do we achieve ethical use of AI and fairness in its actions? Examining executive leadership readiness, the representation of women and people of color, and stakeholder governance might help answer it.
When executive leaders take the initiative, rapid change is more likely to occur. White men dominate executive roles, leaving women and people of color with minimal voice. Artificial intelligence is likely to fall behind in new waves of innovation if it isn’t shaped by a wide array of creators and overseers. According to the Pew Research Center, 68 percent of business executives and technology innovators feel that leadership training establishing innovation principles for their firms is critical to ensuring representation and responsibility in AI systems. Published research identifies those innovation principles as valuing social justice and fairness, protecting privacy, respecting personal autonomy, and being transparent.
The second issue is internal and external bias. The inherent prejudices of the people who build AI carry over into the AI itself: unconsciously tainted data can encode racism, gender bias, and other societal disparities. A lack of diversity and underrepresentation shape the data collected for AI technology. For example, MIT research found that AI systems tasked with guessing the gender of a face “did much better on male faces than female faces.” A study by the startup Element AI found that only 12% of leading AI researchers were women, and researchers from the World Economic Forum similarly estimate that only 22% of AI professionals worldwide are female. Such discrimination is likely to continue until the share of women in the field increases.
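The accuracy gap described above is straightforward to surface in practice: compute accuracy separately for each demographic group instead of one overall number. Here is a minimal sketch; the group names, labels, and records are hypothetical, invented for illustration, and are not drawn from the studies cited.

```python
def accuracy_by_group(records):
    """Compute classification accuracy separately for each demographic group.

    records: list of (group, predicted_label, true_label) tuples.
    Returns a dict mapping group -> fraction of correct predictions.
    """
    totals, correct = {}, {}
    for group, predicted, actual in records:
        totals[group] = totals.get(group, 0) + 1
        if predicted == actual:
            correct[group] = correct.get(group, 0) + 1
    return {g: correct.get(g, 0) / totals[g] for g in totals}

# Illustrative (made-up) data: a classifier that errs more often on one group.
records = [
    ("group_a", "open", "open"), ("group_a", "open", "open"),
    ("group_a", "closed", "closed"), ("group_a", "open", "open"),
    ("group_b", "closed", "open"), ("group_b", "open", "open"),
    ("group_b", "closed", "open"), ("group_b", "closed", "closed"),
]
rates = accuracy_by_group(records)
# group_a: 4/4 correct, group_b: 2/4 correct -- a 50-point accuracy gap
```

A single aggregate accuracy would hide exactly the disparity this breakdown exposes, which is why audits of facial recognition systems report per-group numbers.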
Finally, the responsibility for ensuring inclusivity falls to business leaders and policymakers. Accountability and transparency regulations are critical to preventing harmful outcomes. According to an IBM client survey, 91 percent of businesses believe that being able to explain how their AI solution arrived at a decision is crucial.
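One way a system can "explain how it arrived at a decision" is to use a transparent model whose per-feature contributions can be reported alongside the result. The sketch below assumes a hypothetical linear scoring model; the feature names, weights, and threshold are all invented for illustration.

```python
# Illustrative weights for a hypothetical loan-approval score.
WEIGHTS = {"income": 0.4, "credit_history": 0.5, "debt_ratio": -0.6}

def score_with_explanation(applicant):
    """Return a decision plus the per-feature contributions behind it."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    total = sum(contributions.values())
    decision = "approve" if total >= 0.5 else "deny"
    return decision, contributions

decision, why = score_with_explanation(
    {"income": 0.9, "credit_history": 0.8, "debt_ratio": 0.4}
)
# contributions: 0.36 + 0.40 - 0.24 = 0.52, so the decision is "approve",
# and each feature's share of that score can be shown to the applicant.
```

Opaque models make this kind of itemized answer much harder to give, which is one reason explainability requirements shape which techniques businesses choose.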
AI is predicted to deliver at least a 38 percent increase in efficiency by 2030, with bias and inaccurate recognition expected to decline along the way. Until then, a combined effort by business leaders, executives, and policymakers is needed to open this ever-growing technology up to diversity and inclusion.
Sources Cited
Roe, David. “What Is Ethical Artificial Intelligence and Why Is It Important.” Reworked.co, 25 June 2021, https://www.reworked.co/information-management/why-ethical-ai-wont-catch-on-anytime-soon/.
Vedullapalli, Chaitra. “The Strategic Building Blocks of Ethical AI: Representation, Governance and Leadership.” Forbes, 25 Feb. 2022, https://www.forbes.com/sites/forbestechcouncil/2022/02/25/the-strategic-building-blocks-of-ethical-ai-representation-governance-and-leadership/.


