
Google Advises Caution With AI-Generated Answers


Gary Illyes from Google warned about relying too heavily on Large Language Models (LLMs), emphasizing the need to verify information against authoritative sources before accepting LLM-generated answers. He gave the advice in response to a question, though he didn't disclose what the question was.

LLM Answer Engines

Gary Illyes emphasized the importance of validating information when using AI to answer queries. His caution comes amid OpenAI's testing of SearchGPT, an AI search engine prototype; the timing may be coincidental rather than a direct response to that announcement.

Gary first discussed how LLMs generate responses by selecting words, phrases, and sentences that fit the context of a prompt. He mentioned that a technique called "grounding," which connects LLMs to a database of facts, knowledge, and web pages, can enhance the accuracy of their answers. However, grounding isn't foolproof, and mistakes can still occur.
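Grounding, as Gary describes it, is essentially a retrieval step layered in front of the model: look up relevant sources first, then ask the model to answer from them. Here is a minimal sketch of that idea in Python, assuming a hypothetical in-memory fact store and placeholder `retrieve` and `call_llm` functions; none of this reflects Google's or OpenAI's actual implementation.

```python
# Sketch of "grounding": retrieve supporting facts, then prepend them
# to the prompt so the model answers from sources rather than from its
# parametric memory alone. FACTS, retrieve, and call_llm are all
# hypothetical placeholders, not a real API.

FACTS = {
    "searchgpt": "SearchGPT is an AI search engine prototype OpenAI has been testing.",
    "grounding": "Grounding connects an LLM to a database of facts, knowledge, and web pages.",
}

def retrieve(query: str, store: dict) -> list:
    """Naive keyword lookup standing in for a real retrieval system."""
    terms = query.lower().split()
    return [
        text
        for key, text in store.items()
        if any(t in key or t in text.lower() for t in terms)
    ]

def build_grounded_prompt(question: str) -> str:
    """Assemble a prompt that puts the retrieved sources in front of the model."""
    sources = retrieve(question, FACTS)
    context = "\n".join(f"- {s}" for s in sources) or "- (no sources found)"
    return (
        "Answer using ONLY the sources below. If they are insufficient, say so.\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )

def call_llm(prompt: str) -> str:
    """Stub for whatever model API is in use; echoes the prompt for demo purposes."""
    return f"[model would answer based on]\n{prompt}"

if __name__ == "__main__":
    print(call_llm(build_grounded_prompt("What is grounding?")))
```

Even with this pattern, the model can still misread or misquote the retrieved sources, which is why Gary stresses that grounding isn't foolproof and doesn't replace human judgment.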

Here’s what Gary shared:

“LLMs generate responses by finding words, phrases, and sentences that fit the context and meaning of a prompt. While this helps in producing relevant and coherent answers, it doesn’t guarantee factual correctness. As a user of these LLMs, you need to verify the responses based on your own knowledge or by consulting authoritative resources on the topic.

Grounding can improve accuracy, but it’s not perfect and doesn’t replace human judgment. The internet is rife with both intentional and unintentional misinformation, so you wouldn’t believe everything you read online—why would you do so with LLM responses?

Of course, this post is online too, and I might be an LLM myself. Do what you will with that.”

AI-Generated Content And Answers

Gary’s LinkedIn post serves as a reminder that while LLMs produce answers that are contextually relevant to the questions posed, this relevance doesn’t always equate to factual accuracy.

For Google, the authoritativeness and trustworthiness of content are crucial for ranking. Consequently, it’s in publishers’ best interest to consistently fact-check their content, particularly AI-generated material, to maintain their credibility. The same principle of verifying facts applies to anyone using generative AI for answers.

Original news from SearchEngineJournal