In the wake of the controversy over Google’s recent military contract to provide ‘weaponized AI’ to the Pentagon, the company is attempting to clarify its stance on the matter with an outline of ethical principles guiding its work in artificial intelligence. As detailed by CEO Sundar Pichai in a recent blog post, there are seven:
- “Be socially beneficial.”
- “Avoid creating or reinforcing unfair bias.”
- “Be built and tested for safety.”
- “Be accountable to people.”
- “Incorporate privacy design principles.”
- “Uphold high standards of scientific excellence.”
- “Be made available for uses that accord with these principles.”
The principles would seem to eliminate the possibility of the kind of contract implied by ‘weaponized AI’ – the term used by the head scientist of Google Cloud in recently leaked internal communications to describe Google’s contract with the Pentagon. And as if to hammer that point home, Pichai’s blog post also lays out a set of “AI applications we will not pursue”, which includes “[t]echnologies that cause or are likely to cause overall harm,” “[w]eapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people,” and technologies that contravene international law and human rights.
Of course, the claim that Google was contracted to provide ‘weaponized AI’ to the US military was questionable to begin with; the project was to supply computer vision technology that some feared could be used to improve drone strikes, but Google has maintained that its technology was only meant to flag certain images for human review, and only for “non-offensive uses”.
In any case, Google’s commitment to these clearly defined ethical principles could help reassure those who were concerned about the Pentagon contract, and the many who are worried about the implications of rapidly advancing AI technology in general.
Sources: The Keyword, Fast Company, The Verge