In an SEO Office Hours podcast, John Mueller from Google was asked whether blocking the crawl of a webpage would nullify the “linking power” of internal or external links. His response provided an unexpected perspective and revealed some insights into how Google Search handles this and similar scenarios internally.
About The Power Of Links
There are many ways to think about links. For internal links, Google consistently emphasizes using them to signal which pages on a site are most important. Google hasn't released any recent patents or research papers detailing how it uses external links to rank web pages, so much of what SEOs know about external links rests on potentially outdated information.
John Mueller’s comments do not significantly expand our understanding of how Google uses inbound or internal links. However, they do present a different perspective that, in my opinion, is more useful than it initially seems.
Impact On Links From Blocking Indexing
The person asking the question wanted to know if blocking Google from crawling a webpage affects how internal and inbound links are used by Google.
This was the question:
“Does blocking crawl or indexing on a URL cancel the linking power from external and internal links?”
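For context, the question bundles two separate mechanisms. Crawling is typically blocked in robots.txt, while indexing is blocked with a robots meta tag; a minimal illustration, with a hypothetical /example-page/ path:

User-agent: *
Disallow: /example-page/

<meta name="robots" content="noindex">

The first prevents Googlebot from fetching the page at all; the second lets Googlebot fetch the page but asks for it to be kept out of the index.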
Mueller suggests approaching the question by considering how a user would experience the blocked page, which provides an intriguing insight.
He responded:
“I’d look at it like a user would. If a page is not available to them, then they wouldn’t be able to do anything with it, and so any links on that page would be somewhat irrelevant.”
This response aligns with what we know about the relationship between crawling, indexing, and links. If Google can't crawl the page a link sits on, it won't see the link, and therefore the link will have no effect.
Keyword Versus User-Based Perspective On Links
Mueller’s suggestion to view the issue from a user’s perspective is interesting because it’s not the usual approach to link-related questions. However, it makes sense: if a user is blocked from seeing a webpage, they also can’t see the links on it.
What about external links? A long time ago, I saw a paid link for a printer ink website on a marine biology webpage about octopus ink. Link builders back then believed that if a webpage contained keywords matching the target page (like “octopus ink” and “printer ink”), Google would use that link to rank the page due to its “relevance.”
As silly as that seems today, many people believed in this keyword-based approach to understanding links. In contrast, John Mueller's user-based perspective simplifies understanding links and likely aligns more closely with how Google evaluates them than the outdated keyword-based approach does.
Optimize Links By Making Them Crawlable
Mueller continued his answer by emphasizing the importance of making pages discoverable through links.
He explained:
“If you want a page to be easily discovered, make sure it’s linked from pages that are indexable and relevant within your website. It’s fine to block indexing of pages you don’t want discovered—that’s ultimately your decision—but if an important part of your website is only linked from the blocked page, it will make search much harder.”
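To illustrate the scenario Mueller warns about, here is a hedged sketch using a hypothetical robots.txt and site structure (the paths are invented for the example):

User-agent: *
Disallow: /category-hub/

# If /important-product/ is linked only from /category-hub/, Googlebot never
# fetches the hub page, never sees the link, and /important-product/ becomes
# much harder to discover.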
About Crawl Blocking
A final word about blocking search engines from crawling web pages: a surprisingly common mistake some site owners make is using the robots meta directive to tell Google not to index a webpage but still crawl the links on it.
The erroneous directive looks like this:
<meta name="robots" content="noindex, follow">
There is a lot of misinformation online recommending this meta tag, and it even appears in Google's AI Overviews.
[Screenshot of Google's AI Overviews]
Of course, the above robots directive does not work. As Mueller explains, if a person (or search engine) can’t see a webpage, they can’t follow the links on that page.
Additionally, while there is a “nofollow” directive to make a search engine crawler ignore links on a webpage, there is no “follow” directive to force a search engine to crawl all the links. Following links is the default behavior, and search engines decide this for themselves.
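For reference, the valid forms of the robots meta tag look like this; since following links is the default, it never needs to be declared:

<meta name="robots" content="noindex">
<meta name="robots" content="nofollow">
<meta name="robots" content="noindex, nofollow">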
Original news from SearchEngineJournal