In a recent episode of Google’s “Search Off The Record” podcast, Analyst Gary Illyes clarified how Googlebot handles links during the crawling process.
Illyes’ explanation challenges the common belief that Googlebot navigates websites by following links in real time.
He revealed that, instead of following links sequentially, Googlebot collects them for processing at a later stage.
This misconception appears to have originated from Google’s own documentation.
“It’s my pet peeve,” Illyes remarked during the podcast, referring to Google’s support pages.
He elaborated:
“On our site, we keep saying Googlebot is following links, but no, it’s not following links. It’s collecting links and then revisits those links later.”
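To make the distinction concrete, here is a minimal Python sketch of the collect-then-revisit pattern Illyes describes: newly discovered links go into a frontier queue and are fetched in a later pass, never followed the moment they appear on a page. The `fetch` and `extract_links` callables and the toy link graph are illustrative stand-ins, not Google’s actual pipeline.

```python
from collections import deque

def crawl(seed_urls, fetch, extract_links, max_pages=100):
    """Illustrative queue-based crawl: links found on a page are
    collected into a frontier and fetched in a later pass, rather
    than followed recursively the moment they are discovered."""
    frontier = deque(seed_urls)        # links collected for later processing
    seen = set(seed_urls)
    visited = []
    while frontier and len(visited) < max_pages:
        url = frontier.popleft()       # revisit a previously collected link
        page = fetch(url)
        visited.append(url)
        for link in extract_links(page):   # collect, don't follow
            if link not in seen:
                seen.add(link)
                frontier.append(link)
    return visited

# Toy link graph standing in for real pages (hypothetical data).
graph = {"/": ["/a", "/b"], "/a": ["/b", "/c"], "/b": [], "/c": []}
print(crawl(["/"], fetch=lambda u: u, extract_links=lambda p: graph.get(p, [])))
```

One practical consequence of a frontier queue is that crawl order comes out breadth-first, rather than the depth-first path a crawler would trace if it followed each link as soon as it saw it.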
Google’s official documentation on crawlers states:
“Crawler (sometimes also called a ‘robot’ or ‘spider’) is a generic term for any program that is used to automatically discover and scan websites by following links from one web page to another.”
This wording suggests that Googlebot navigates the web by actively following links in real time.
This discrepancy between Google’s public messaging and the actual behavior of their crawler raises questions about other potential misunderstandings within the SEO community.
This revelation could have significant implications for our understanding of Google’s crawling process:
Many SEO strategies are based on the assumption that Googlebot navigates websites by following internal links in a manner similar to a site visitor.
However, if Illyes’ description is accurate, it indicates that Googlebot’s behavior is more intricate than previously believed.
While this doesn’t undermine current SEO best practices, it underscores how important it is for SEO professionals to stay current on the subtleties of how Google operates.
Original news from SearchEngineJournal