Instagram is trying to curtail bot activity on its eponymous social media platform. To that end, the company is starting to ask some users to take video selfies to prove that they are human and that their accounts are not run by social media bots.
According to Instagram, the video selfies are only used for liveness detection, and do not use any form of facial recognition. If true, that would mean that the videos are not being used for identity verification or authentication, though Instagram is asking people to take videos from multiple angles to confirm liveness. Either way, Instagram indicated that any video selfies collected through the program would be deleted 30 days after being recorded.
At the moment, not every user is being asked to submit a video selfie. Instagram seems to be targeting accounts that have displayed bot-like behavior, such as following a large number of accounts in a short amount of time. Bots on the social media platform are often used to inflate like and follower counts, or to distribute spam and spearhead harassment campaigns.
Asking accounts with suspicious habits for proof of liveness helps ensure that there is a real person at the keyboard. Instagram first introduced video selfies more than a year ago, though some reports have suggested that technical issues slowed the initial rollout. That could explain why the program attracted little mainstream scrutiny until relatively recently, when the video checks became more widespread.
Instagram has claimed that the video selfies will be subject to human review. Meanwhile, its parent company Meta has claimed that it will not store or share any of the data gathered through the program. Meta (formerly known as Facebook) has also announced that it will discontinue its use of facial recognition across most use cases. The tech giant paid $650 million to settle a facial recognition lawsuit in 2020, and Instagram is currently facing its own separate BIPA case.
Source: The Verge