Artificial Intelligence May Amplify Bias, But Also Can Help Eliminate It – Forbes
Why is bias in artificial intelligence (AI) such a vexing problem? That’s the question Karen Hao asks in a recent issue of MIT Technology Review, where she explores the ways bias creeps into even the most highly vetted algorithms.
“We often shorthand our explanation of AI bias by blaming it on biased training data,” she points out. “The reality is more nuanced: bias can creep in long before the data is collected as well as at many other stages of the deep-learning process.” It stems from lack of awareness of the downstream impacts of data, imperfect processes, and operating with a lack of social context. Then there’s the grand philosophical quandary — “what the absence of bias should look like.”
While AI is often seen as reflecting the biases of its creators or of the processes its creators employed, it is also potentially a tool for removing those biases. Jake Silberg and James Manyika, both with McKinsey, point out that along with reforming AI itself, AI can be employed “to identify and reduce the effect of human biases.” They explain how this could happen:
“This could take the form of running algorithms alongside human decision makers, comparing results, and examining possible explanations for differences. Similarly, if an organization realizes an algorithm trained on its human decisions (or data based on prior human decisions) shows bias, it should not simply cease using the algorithm but should consider how the underlying human behaviors need to change.”
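Running an algorithm alongside human decision makers and comparing results, as Silberg and Manyika describe, can start with something as simple as tallying outcomes by group and flagging where the two diverge. A minimal sketch of that comparison (the decision logs, group labels, and tolerance here are hypothetical, purely for illustration):

```python
from collections import defaultdict

def approval_rates(decisions):
    """Compute per-group approval rates from (group, approved) pairs."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    return {g: approvals[g] / totals[g] for g in totals}

def flag_divergence(human, model, tolerance=0.10):
    """Flag groups where model and human approval rates diverge by more
    than `tolerance` -- candidates for examining possible explanations."""
    return {g: (human[g], model[g]) for g in human
            if g in model and abs(human[g] - model[g]) > tolerance}

# Hypothetical parallel decision logs: (group, approved)
human_log = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
model_log = [("A", True), ("A", True), ("A", True),
             ("B", False), ("B", False), ("B", False)]

gaps = flag_divergence(approval_rates(human_log), approval_rates(model_log))
```

A flagged group does not by itself say whether the humans or the model are biased; it marks where, per the authors, an organization should examine explanations for the difference and consider how the underlying behaviors need to change.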
This is an effort that can’t be left to AI developers or IT departments. Taking as much bias as possible out of AI, and employing AI as a tool to find and eradicate bias, requires that executives, managers and employees from all parts of the enterprise put their heads together to develop best practices and ethical standards, they state.
Silberg and Manyika recommend the following steps to reduce AI bias:
Raise awareness of where AI is being employed, and where it can help reduce bias. This requires organizational knowledge of, and ongoing review of, how, where and when AI algorithms are deployed. “Given the growing number of use cases where AI can reduce disparities caused by human bias, organizations will need to stay up to date to see how and where AI can improve fairness—and where AI systems have struggled,” Silberg and Manyika state.
Test as frequently as possible. This not only includes employing the right tools and platforms to provide transparency and relevant metrics, but also procedures such as “improving data collection through more cognizant sampling and using internal ‘red teams’ or third parties to audit data and models, as well as proactively engaging with communities potentially affected.”
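Frequent testing with “relevant metrics” can take the form of automated fairness checks run against each audit batch of model outputs. One widely used screen (my example, not one the article names) is the “four-fifths” disparate-impact ratio; a hedged sketch with made-up predictions:

```python
def disparate_impact_ratio(predictions):
    """Ratio of the lowest group selection rate to the highest.
    `predictions` maps group -> list of 0/1 model outcomes."""
    rates = {g: sum(outcomes) / len(outcomes)
             for g, outcomes in predictions.items()}
    return min(rates.values()) / max(rates.values())

# Hypothetical audit batch: 1 = positive (e.g., approved) outcome
batch = {"group_x": [1, 1, 1, 0, 1],
         "group_y": [1, 0, 1, 0, 0]}

ratio = disparate_impact_ratio(batch)
passes_four_fifths = ratio >= 0.8  # a common screen in US employment practice
```

A check like this is deliberately crude; the sampling improvements, red-team audits, and community engagement the authors mention are what give such a number meaning.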
Understand where human bias persists. AI will take on a growing share of human decision-making, so it’s important to explore how those decisions are actually made. “AI can help by surfacing long-standing biases that may have gone unnoticed,” the McKinsey authors point out.
Promote humans and machines working together. There are cases where AI can make most decisions, and others where humans need to be involved. Such an approach may be “’human-in-the-loop’ decision making, where algorithms provide recommendations or options, which humans double-check or choose from.”
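The “human-in-the-loop” pattern the authors describe can be sketched as a routing rule: the algorithm decides only the cases where it is confident, and everything else is queued for a person to double-check or choose from. The confidence threshold, score cutoff, and case fields below are hypothetical:

```python
def route_decision(case_id, model_score, confidence, threshold=0.9):
    """Auto-decide only high-confidence cases; queue the rest for a human."""
    if confidence >= threshold:
        return {"case": case_id,
                "decision": model_score >= 0.5,  # illustrative cutoff
                "decided_by": "model"}
    return {"case": case_id, "decision": None, "decided_by": "human_review"}

# Hypothetical cases: (id, model score, model confidence)
queue = [route_decision("loan-17", 0.82, 0.95),
         route_decision("loan-18", 0.40, 0.55)]

review_queue = [item for item in queue if item["decided_by"] == "human_review"]
# In practice, each queued case would go to a reviewer along with the
# model's recommendation or a set of options to choose from.
```

Where to set the threshold is itself a policy choice: lowering it automates more decisions, raising it keeps more of them with people.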
Invest more in bias research. “Business leaders can also help support progress by making more data available to researchers and practitioners across organizations working on these issues, while being sensitive to privacy concerns and potential risks,” Silberg and Manyika suggest.
Promote greater diversity in the AI field itself. “Many have pointed to the fact that the AI field itself does not encompass society’s diversity, including on gender, race, geography, class, physical disabilities, and more characteristics,” Silberg and Manyika urge. “A more diverse AI community will be better equipped to anticipate, spot, and review issues of unfair bias and better able to engage communities likely affected by bias.”