Update on AI for Good: Making Business Sense, and a Cautionary Note
By AI Trends Staff
AI for Good is a United Nations platform that fosters dialogue on the beneficial use of AI by developing concrete projects. Progress is being made in specific areas, public relations around the topic have been positive, and a respected researcher recently sounded a cautionary note.
IBM launched Science for Social Good in 2016, aiming to apply technology to 17 issues highlighted by the United Nations as Sustainable Development Goals. These include reducing poverty, inequality, and damage to the environment, while raising standards of healthcare and education around the world.
IBM recently announced progress made across 15 of those 17 issues. In a recent account in Forbes, author Bernard Marr reported on interviews about the work with IBM Fellow Aleksandra Mojsilovic and principal researcher Kush Varshney.
The idea had its genesis around 2013. At the time, IBM had “3,000 researchers around the world, and we decided there had to be a way to leverage these skills more broadly and in a coherent way… it didn’t seem right that we were trying to solve these problems in our spare time,” Mojsilovic stated in the Forbes article.
Aleksandra Mojsilovic, IBM Fellow, IBM Research AI
IBM learned some lessons fighting the Ebola outbreak of 2014-2016 in West Africa. “We learned that having a program that’s really focused on creating tools or technology won’t work without the participation of those who really work with these problems,” Mojsilovic stated.
Initiatives were subsequently launched to address fairness around risk assessment in financial services, health insurance in the US, and mobile-based money lending programs in east Africa. The goal was to use technology to mitigate the risk of bias leading to unfair outcomes.
Examples of AI for social good are tracked by McKinsey in an annual report. Under the category of equality and inclusion, one use case was based on the work of Affectiva, spun out of the MIT Media Lab, and Autism Glass, a Stanford research project. The project used AI to automate the recognition of emotions and provide social cues to help individuals on the autism spectrum interact in social situations.
Mwila Kangwa, CEO of AgriPredict of Zambia, a web- and mobile-phone-based agricultural risk management platform built on AI and machine learning, participated in the AI for Good Global Summit held in Geneva in May 2019. AgriPredict provides farmers with tools to help identify diseases and predict pest infestations and weather conditions. A farmer takes a picture of the suspected diseased plant, and the system provides a diagnosis, options for treatment, and the location of the nearest agro supplier. Farmers can also receive information on weather patterns. Users access the services through smartphone applications and social media, including Twitter, Facebook, and WhatsApp. For those without a smartphone, the service can also be accessed via a USSD platform used by basic cellular phones.
CEO Kangwa told AI Trends, “We are very much active at AgriPredict. We are currently expanding our products and range for crop disease detection.”
A panel on innovative applications of AI in education at the AI for Good Global Summit included representatives of: Minecraft Education, offering an open-world game that promotes creativity, collaboration, and problem-solving; and the Connect to Learn public-private partnership from Ericsson, which strives to increase access to quality education, especially for girls, through life skills programs and the integration of technology tools and digital learning resources in schools.
The Chan Zuckerberg Initiative, from Mark Zuckerberg of Facebook and his wife, Priscilla Chan, is using technology to address a range of challenges including affordable housing. The initiative formed the Partnership for the Bay’s Future, a public-private partnership that aims to protect up to 175,000 households over the next five years and produce more than 8,000 homes in the next five to 10 years in the Bay Area.
Caution Issued On AI for Good “Beta Testing”
A caution that AI for good projects can often amount to pilot beta testing with unproven technologies was issued by Mark Latonero in a recent account in Wired. Dr. Latonero is the Research Lead for Human Rights at Data & Society. He is a fellow at Harvard Kennedy School’s Carr Center for Human Rights Policy, Berkeley Law’s Human Rights Center, and USC’s Annenberg Center for Communication Leadership & Policy, where he earned his Ph.D.
Dr. Latonero works on the social and policy implications of emerging technology and examines the benefits, risks, and harms of digital technologies, particularly in human rights and humanitarian contexts.
“Tech companies that set out to develop a tool for the common good, not only their self-interest, soon face a dilemma: They lack the expertise in the intractable social and humanitarian issues facing much of the world,” he stated. Thus, many enter partnerships. IBM’s social good program has 19 partners, and Facebook partners with the Red Cross to help find missing people after disasters. “Partnerships are smart. The last thing society needs is for engineers in enclaves like Silicon Valley to deploy AI tools for global problems they know little about,” Latonero stated.
Read the source articles in Forbes, from McKinsey, and in Wired.