FICO has released a new report suggesting that many organizations remain lax about their ethical AI responsibilities. The State of Responsible AI was put together by Corinium and reflects survey responses from 100 data executives, along with interviews with AI pioneers.
The biggest problem, according to the report, is that many executives simply do not have a strong understanding of the risks inherent in AI development. Sixty-five percent of respondents felt that their company could not explain how its own AI models arrive at a decision. They also felt that many board members and executives did not fully appreciate the importance of ethical considerations, with most (73 percent) indicating that it was difficult to get executives to prioritize responsible AI.
That lack of understanding has a direct impact on AI development, since it makes it harder to secure the funding needed to implement ethical safeguards. Only 22 percent of respondents work for a company with an AI ethics board, and 38 percent said they took no precautions to mitigate bias during model development.
The picture is no better after deployment: a mere 20 percent of companies actively monitor their models in production to make sure they are fair, and there does not seem to be much motivation to improve that figure. Only 39 percent have been told that AI governance is a priority during development, while 28 percent have been given more resources for ongoing monitoring and maintenance. Both figures are significantly lower than the 49 percent that reported increased investment in AI development more generally.
That apathy extended even to many of the data professionals in the survey. A full 43 percent of respondents believed they had no particular responsibility to the people affected by their systems beyond the obligation to comply with regulatory requirements. In effect, they passed responsibility on to the government, suggesting that they cannot be held accountable for biased systems that adversely affect people’s lives.
However, perceptions do seem to be trending in a more positive direction. Most respondents (55 percent) believed that any AI system needs to meet certain ethical standards, while 63 percent expect ethical considerations to factor into their company’s overall strategy moving forward. Fifty-five percent also check for latent bias, even if doing so is not a formal step in their development process.
For its part, FICO warns that businesses with poor AI practices are needlessly placing themselves at risk, since they could be held liable for the behavior of their models. That liability could take the form of a reputational hit (and a subsequent loss of business) or a financial penalty should the company fail to meet regulatory requirements. FICO argues that ethical best practices can help mitigate that risk and exposure, and prepare companies for stricter regulations in the future.
The FICO findings echo a separate study from NTT Data Services, which similarly found that many executives have a limited understanding of ethical AI. Meanwhile, the World Economic Forum has organized an Action Alliance to promote ethical AI development.