Improved computer vision technology from Google could make a substantial impact in the fight against digital child sexual abuse material (“CSAM”).
This kind of technology is already in use, leveraging artificial intelligence to flag media that may depict the sexual abuse of children. It's a disturbing but profoundly important application of computer vision, allowing investigators and tech companies to identify this kind of content at scale and potentially sparing human reviewers some of the traumatic work of sorting through such material. It's also very much in line with the ethical guidelines for Google's AI work laid out by the company's CEO earlier this year.
In announcing its upgraded AI technology, Google said the system can identify CSAM content that previously slipped under the radar. "We've seen firsthand that this system can help a reviewer find and take action on 700% more CSAM content over the same time period," the post said.
Google emphasized that it carries out this work in collaboration with expert organizations like the Internet Watch Foundation, as well as other NGOs and tech companies, offering its enhanced AI for free as part of its Content Safety API. "We will continue to invest in technology and organizations to help fight the perpetrators of CSAM and to keep our platforms and our users safe from this type of abhorrent content," the company said.
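To make the workflow concrete, here is a minimal sketch of how a partner organization might build a review queue on top of a content-classification API like this one. The endpoint URL, request fields, and response schema below are purely illustrative assumptions; the real Content Safety API is access-restricted, and its actual interface may differ.

```python
# Hypothetical sketch of a review-prioritization workflow on top of a
# content-classification API. The endpoint, request fields, and response
# shape are assumptions for illustration, not the real Content Safety API.
import requests

CLASSIFY_URL = "https://example.com/v1/classify"  # placeholder endpoint


def classify_image(image_path: str, api_key: str) -> float:
    """Send one image for classification; return a 0-1 priority score."""
    with open(image_path, "rb") as f:
        resp = requests.post(
            CLASSIFY_URL,
            headers={"Authorization": f"Bearer {api_key}"},
            files={"image": f},
            timeout=30,
        )
    resp.raise_for_status()
    # Assumed response shape: {"priority": 0.97}
    return resp.json()["priority"]


def build_review_queue(image_paths, api_key, threshold=0.5):
    """Score each image and return flagged items, highest score first,
    so human reviewers see the most likely matches before anything else."""
    scored = [(path, classify_image(path, api_key)) for path in image_paths]
    flagged = [(path, score) for path, score in scored if score >= threshold]
    return sorted(flagged, key=lambda item: item[1], reverse=True)
```

The key design idea reflected in Google's announcement is prioritization rather than pure automation: the classifier's score orders the queue so that human reviewers reach the most likely material first, which is how a reviewer can act on far more content in the same amount of time.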
Source: The Keyword