Google has introduced a new update to its search algorithm designed to tackle non-consensual explicit content, particularly focusing on artificially generated images and videos known as “deepfakes.”
In a blog post, the company detailed several updates to its search functionality and content removal processes.
Google has updated its systems to make it easier to remove non-consensual explicit fake content from search results. Once a removal request is approved, the system will work to filter out similar explicit results across related searches for the individual involved.
Additionally, a new scanning system has been introduced to identify and remove duplicate images after the original has been successfully removed from search results.
The ranking algorithm has also been adjusted to demote explicit fake content across a range of searches.
For queries that seek such content and include people’s names, the system prioritizes displaying non-explicit content, such as news articles.
Websites that have had numerous pages removed due to fake explicit imagery may experience changes in their overall search rankings.
Google reports that these updates have reduced exposure to explicit image results for certain queries by over 70%.
The problem of explicit fake content extends beyond search engines, and Google plans to work with industry partners and experts to address it on a broader scale.
Emma Higham, Product Manager at Google, remarked on the update, saying:
“These changes are significant improvements to our protections in Search, but there’s still more to do to address this issue. We’ll continue developing new solutions to help those affected by this content.”
This algorithm update reflects Google’s ongoing efforts to adapt its search functionality in response to the changing landscape of digital content challenges.
Original news from SearchEngineJournal