Google has long preached the gospel of “content written by people, for people.” But in a recent update, the search giant quietly rewrote its own rules to acknowledge the rise of artificial intelligence.
In the latest iteration of Google Search’s “Helpful Content Update,” the phrase “written by people” was replaced with a statement that Google monitors “content made for people” when ranking sites in its search results.
The new language shows that Google now recognizes AI as a legitimate tool in content creation. But rather than focusing on distinguishing AI content from human content, the leading search engine wants to surface valuable content that benefits users, regardless of whether humans or machines created it.
Google is itself investing heavily in AI across its products, including an AI-powered news-writing tool, its own chatbot Bard, and new experimental search features. Updating its guidelines is therefore in line with the company’s own strategic direction.
The search leader still seeks to reward original, helpful, and human content that provides value to users.
“By definition, when you use AI to write your content, it gets rehashed from other sites,” Google Search Relations team lead John Mueller said on Reddit.
To SEO or not to SEO?
The implications are clear: repetitive or low-quality AI content still hurts SEO, even as the technology advances. Writers and editors still need to play an active role in the content creation process. A lack of human oversight is risky because AI models tend to hallucinate. Some of these mistakes are merely funny, but others are hurtful, can cost millions of dollars, or even put lives in danger.
SEO, or search engine optimization, refers to strategies for improving a website’s ranking in search engines such as Google. Higher rankings mean more visibility and traffic. SEO practitioners have long tried to “beat” search algorithms by tailoring content to whatever Google’s algorithm rewards.
Google seems to be penalizing the use of AI for simple content summarization or rephrasing, and has its own ways of identifying AI-generated content.
“This classifier process is fully automated, using a machine learning model,” Google says, meaning it uses AI to identify good and bad content.
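Google has not published how this classifier works. As a purely hypothetical sketch of the general shape of such a system (labeled examples in, a learned quality scorer out), a minimal pipeline in scikit-learn might look like this; the training texts, labels, and features below are all invented for illustration:

```python
# Hypothetical sketch only: Google's actual classifier is proprietary.
# This shows the general pattern of an automated quality classifier:
# labeled examples go in, a learned scoring function comes out.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data with invented labels: 1 = helpful, 0 = unhelpful.
texts = [
    "Step-by-step guide with original measurements and test results.",
    "In-depth review based on two weeks of hands-on use.",
    "Best best top product buy now best ranked product product.",
    "This page rewrites other articles without adding anything new.",
]
labels = [1, 1, 0, 0]

# TF-IDF features feeding a logistic regression: a minimal learned model.
classifier = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(),
)
classifier.fit(texts, labels)

# Score an unseen page as a probability that it is "helpful".
page = "Original benchmark data collected and explained by the author."
print(classifier.predict_proba([page])[0][1])
```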
However, part of the challenge is that AI content recognition relies on imprecise tools. OpenAI itself recently retired its own AI classifier, acknowledging its low accuracy. AI-generated text is hard to detect because the models are explicitly trained to “sound” human, so the contest between content generators and content detectors is unlikely to end as models grow more powerful and accurate over time.
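One common heuristic behind public detectors is perplexity: machine-written text tends to be more statistically predictable to a language model than human prose. Below is a minimal sketch using GPT-2 via the Hugging Face transformers library; this is an assumption about how such detectors typically work, not a description of Google’s or OpenAI’s systems, and the threshold shown is invented:

```python
# Illustrative perplexity-based detection, a heuristic used by some
# public AI-text detectors. Neither Google nor OpenAI has confirmed
# this exact approach; the threshold below is invented.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Average per-token perplexity of `text` under GPT-2."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing labels=input_ids makes the model return the mean
        # cross-entropy loss over the sequence.
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return torch.exp(loss).item()

score = perplexity("The quick brown fox jumps over the lazy dog.")
# Low perplexity suggests machine-like text, but fluent human prose
# also scores low, which is one reason such detectors are unreliable.
print(f"perplexity={score:.1f}, flagged={score < 30.0}")
```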
Additionally, training AI models on successive generations of AI-generated content can lead to model collapse, in which each new model drifts further from the original data distribution and loses its rarer details.
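The effect can be demonstrated with a toy experiment: repeatedly fit a model to samples drawn from the previous model, and the learned distribution drifts and narrows. The sketch below uses a one-dimensional Gaussian as a deliberately simplified stand-in for a real generative model:

```python
# Toy illustration of model collapse: each "generation" is fitted only
# to samples produced by the previous generation. A one-dimensional
# Gaussian stands in for a full generative model.
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: the true data distribution.
mean, std = 0.0, 1.0

for generation in range(1, 21):
    # Draw a finite "training set" from the current model...
    samples = rng.normal(mean, std, size=50)
    # ...and fit the next generation to those samples alone.
    mean, std = samples.mean(), samples.std()
    print(f"gen {generation:2d}: mean={mean:+.3f}, std={std:.3f}")

# Across generations the fitted std tends to shrink and the mean to
# drift: sampling error compounds, and the tails of the original
# distribution are progressively forgotten.
```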
Google says it doesn’t try to weed out AI-generated content outright, but will recognize it and reward human-written content accordingly. The approach resembles training a discriminator: one AI model tries to make its output look natural, while another tries to judge whether a given creation is natural or artificial. The same adversarial process underlies generative adversarial networks (GANs).
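For reference, a minimal GAN looks like the sketch below: a generator learns to mimic a “natural” one-dimensional distribution while a discriminator learns to tell real samples from generated ones. This is the standard textbook setup in PyTorch, not a description of anything Google runs internally:

```python
# Minimal GAN sketch (PyTorch). The generator tries to produce samples
# that look "natural" (drawn from N(4, 1.25)); the discriminator tries
# to tell real from generated. A generic textbook setup, not Google's.
import torch
import torch.nn as nn

def real_data(n):            # samples from the "natural" distribution
    return torch.randn(n, 1) * 1.25 + 4.0

def noise(n):                # random input for the generator
    return torch.randn(n, 8)

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(),
                  nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # Discriminator step: label real samples 1 and generated samples 0.
    real, fake = real_data(64), G(noise(64)).detach()
    loss_d = (bce(D(real), torch.ones(64, 1))
              + bce(D(fake), torch.zeros(64, 1)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: try to make the discriminator call fakes real.
    fake = G(noise(64))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print("generated mean:", G(noise(1000)).mean().item())  # approaches 4.0
```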
Standards will continue to evolve as AI proliferates. For now, Google appears to be focused on content quality rather than separating human contributions from those made with machines.