Artificial intelligence software is increasingly used by HR departments to screen résumés, conduct video interviews, and assess job seekers' aptitude.
Now, some of America’s largest corporations are joining forces to prevent this technology from producing biased results that could perpetuate or even exacerbate past discrimination.
The Data & Trust Alliance, announced Wednesday, has signed up major employers across a variety of industries, including CVS Health, Deloitte, General Motors, Humana, IBM, Mastercard, Meta (Facebook's parent company), Nike and Walmart.
The corporate group is not a lobbying organization or a think tank. Instead, it has developed an evaluation and scoring system for artificial intelligence software. Working with corporate and outside experts, the Data & Trust Alliance created an evaluation of 55 questions covering 13 topics, along with a scoring system. The goal is to detect and combat algorithmic bias.
“It’s not just about embracing principles, it’s about implementing something concrete,” said Kenneth Chenault, co-chair of the group and former CEO of American Express, which has agreed to adopt the anti-bias tool kit.
The companies are responding to concerns, backed by a substantial body of research, that AI programs can inadvertently produce biased results. Data is the fuel of modern artificial intelligence software, so the data that is selected, and how it is used to make inferences, is crucial.
If the data used to train an algorithm is mostly information about white men, the results are likely to be biased against minorities or women. And if the data used to predict success at a company is based on who has done well there in the past, the result may well be an algorithmically amplified version of past bias.
Seemingly neutral data sets, when combined with others, can produce results that differ by race, gender, or age. The group's questionnaire asks, for example, about the use of such “proxy” data, including the type of mobile phone a person uses, sports interests, and social club memberships.
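The proxy effect described above can be illustrated with a minimal synthetic sketch. Nothing here comes from the alliance's actual tool or data; the feature names, correlation strength, and screening rule are all invented for illustration. The idea is simply that a model never shown a protected attribute can still discriminate if it relies on a feature that correlates with it.

```python
# Illustrative sketch (synthetic data, hypothetical feature): how a
# seemingly neutral "proxy" feature can reproduce group bias even when
# group membership is never given to the screening rule.
import random

random.seed(0)

# Synthetic applicants: phone type correlates with group (the proxy).
applicants = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    # Assumed correlation: group A mostly uses phone type 1.
    phone = 1 if random.random() < (0.8 if group == "A" else 0.2) else 0
    applicants.append({"group": group, "phone": phone})

# A naive rule learned from past hires that skewed toward phone type 1
# will favor that phone type, even though it is irrelevant to the job.
def screen(applicant):
    return applicant["phone"] == 1

def selection_rate(group):
    pool = [a for a in applicants if a["group"] == group]
    return sum(screen(a) for a in pool) / len(pool)

print(f"selection rate, group A: {selection_rate('A'):.2f}")
print(f"selection rate, group B: {selection_rate('B'):.2f}")
```

Running the sketch, group A's selection rate lands near 0.80 and group B's near 0.20, despite the rule never seeing the group label. This is the kind of disparity the questionnaire's proxy-data questions are meant to surface.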
Governments around the world are moving toward rules and regulations: the European Union has proposed a regulatory framework for AI, and the White House is working on a “bill of rights” for AI.
In a note to technology companies, the FTC bluntly warned: “Hold yourself accountable — or be ready for the FTC to do it for you.”
The Data & Trust Alliance says it is addressing the potential dangers of powerful algorithms in workforce decisions early, rather than reacting after obvious, large-scale harm has been done, as Silicon Valley did on issues such as privacy and the spread of disinformation.
“We have to move past the era of ‘acting fast, breaking things and sorting it out later,’” said Mr. Chenault, who served on Facebook’s board for two years, until 2020.
Corporate America is promoting programs for a more diverse workforce. Mr. Chenault, currently chairman of the venture capital firm General Catalyst, is one of the most prominent African Americans in business.
Speaking about the new initiative, Ashley Casovan, chief executive of the Responsible AI Institute, a nonprofit that develops certification systems for AI products, said the focused approach and the commitment of large companies were encouraging.
“But having companies do this themselves is problematic,” said Ms. Casovan, an adviser on artificial intelligence to the Organization for Economic Cooperation and Development. “We believe it should ultimately be done by an independent body.”
The corporate group grew out of conversations among business leaders who realized that companies in almost every industry were “becoming data and AI companies,” said Mr. Chenault. That meant new opportunities, but also new risks.
The group was assembled by Mr. Chenault and Samuel Palmisano, co-chairman of the alliance and former CEO of IBM, beginning in 2020, mainly by reaching out to the chief executives of large companies.
They decided to focus on the use of technology in workforce decisions: hiring, promotion, training, and compensation. Senior staff members from their companies were assigned to carry out the project.
Internal surveys showed that their companies were adopting AI-powered software in human resources, but that most of the technology came from vendors. And the corporate users had little insight into what data the software vendors used in their algorithmic models, or how those models worked.
To develop its evaluation, the corporate group brought in staff from human resources, data analytics, legal, and procurement, along with software vendors and outside experts. The result is a system for detecting, measuring, and mitigating bias that examines both data-handling practices and the development of HR software.
“Human values are embedded in every algorithm, and this gives us another opportunity to look at it,” said Nuala O’Connor, senior vice president of digital citizenship at Walmart. “It’s practical and fast.”
The assessment program was developed and refined over the past year. The goal was to apply it not only to large HR software vendors such as Workday, Oracle, and SAP, but also to the many smaller companies emerging in the fast-growing field of workplace technology.
Many of the questions in the anti-bias questionnaire are about data that feeds AI models.
“The promise of this new era of data and AI will be lost if we don’t do this responsibly,” said Mr. Chenault.