After considerable pushback from critics and users alike, Apple is delaying its anti-child-abuse measures.

In August, the tech giant announced a new policy that would use on-device technology to spot potential child abuse imagery in iCloud and Messages, but concerns quickly followed. Experts warned that despite Apple's privacy assurances, the technology could ultimately put all Apple users at risk.

On Friday, Apple said it would delay the rollout of the technology altogether to make improvements and fully ensure user privacy. 

“Based on feedback from customers, advocacy groups, researchers and others, we have decided to take additional time over the coming months to collect input and make improvements before releasing these critically important child safety features,” Apple said in an updated statement on its website. 

The child sexual abuse material (CSAM) detection technology was supposed to become available later this year with the iOS 15 rollout, but it’s now unclear when, or if, the feature will debut.

The new technology would work in two ways: first, by scanning an image before it is backed up to iCloud and checking it against known CSAM; if the image matches, Apple would receive that data. The second part would use machine learning to identify and blur sexually explicit images children receive through Messages.
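For a rough sense of how the first mechanism would slot into an upload path, here is a minimal, purely illustrative sketch. The UploadScreener type is hypothetical, and an ordinary SHA-256 digest with an in-memory set of known hashes stands in for Apple's far more sophisticated perceptual "NeuralHash" and cryptographic matching protocol.

```swift
import Foundation
import CryptoKit

// Hypothetical, simplified illustration of on-device hash matching before
// an iCloud backup. A plain SHA-256 digest and an in-memory set of known
// hashes substitute for Apple's perceptual hashing and private matching;
// this only shows the general control flow, not the real system.
struct UploadScreener {
    // Placeholder database of known-bad image hashes (hex strings).
    let knownHashes: Set<String>

    // Returns true when the image may be uploaded, false when it matches.
    func shouldUpload(imageData: Data) -> Bool {
        let digest = SHA256.hash(data: imageData)
        let hex = digest.map { String(format: "%02x", $0) }.joined()
        return !knownHashes.contains(hex)
    }
}

// Example usage with a dummy payload and a placeholder hash list.
let screener = UploadScreener(knownHashes: ["a1b2c3"]) // illustrative only
let photo = Data([0x00, 0x01, 0x02])
if screener.shouldUpload(imageData: photo) {
    print("No match found; proceeding with iCloud backup.")
} else {
    print("Hash matched a known entry; flagging instead of uploading.")
}
```

Note that in Apple's described design, the device does not learn a plain match result as this sketch does; matches are encoded in encrypted "safety vouchers" that only become readable after a threshold of matches is crossed.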

However, after the new policy was announced, privacy advocates and groups said Apple was essentially opening a back door that bad actors could misuse.
