Many people believe that Google favors big brands and low-quality content, and that the trend is getting worse over time. The perception is widely shared, often backed by personal experience of subpar search results. The reasons behind it are intriguing, and the problem is not merely a matter of perception.
This isn’t the first time Google’s search engine results pages (SERPs) have shown a bias toward big-brand websites. Early in the development of Google’s algorithm, it was evident that sites with high PageRank could rank for almost any keyword they wanted.
For instance, I recall a web design company that constructed numerous websites, establishing a network of backlinks that significantly boosted their PageRank to levels typically only seen in major corporate entities like IBM. Consequently, they secured top rankings for competitive keyword phrases such as “Web Design” and its various permutations, including “Web Design + [any state in the USA].”
It was widely understood that websites boasting a PageRank of 10, the highest level displayed on Google’s toolbar, enjoyed a considerable advantage in the SERPs, often surpassing more relevant webpages. The discrepancy did not escape notice, prompting Google to eventually refine its algorithm to address this issue.
This anecdote underscores an example of how Google’s algorithm inadvertently fostered a bias in favor of large brands.
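To make the mechanics concrete, here is a minimal sketch of the classic published PageRank calculation (the simplified original formula, not whatever Google runs today) applied to a hypothetical toy link graph; the site names and parameters are invented for illustration. It shows how a cluster of interlinked satellite sites can funnel score to a single target, which is essentially what the link network described above accomplished.

```python
# Illustrative only: a toy PageRank power iteration showing how a tightly
# interlinked network of sites can inflate one target site's score.
# This uses the classic published PageRank formula, not Google's current system.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                continue
            share = damping * rank[page] / len(outlinks)
            for target in outlinks:
                new_rank[target] += share
        rank = new_rank
    return rank

# Hypothetical graph: satellite sites built by the agency all link to "agency",
# while "independent" receives no backlinks from the network.
toy_web = {
    "agency": ["satellite1"],
    "satellite1": ["agency"],
    "satellite2": ["agency"],
    "satellite3": ["agency"],
    "independent": [],
}

scores = pagerank(toy_web)
print(scores)  # "agency" ends up far ahead of "independent"
```

On this toy graph, "agency" converges to a score many times higher than "independent", purely because of the satellite links pointing at it.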
Throughout Google’s history, two recurring themes have persisted: the prevalence of low-quality content and the dominance of big brands over small, independent publishers. This has been particularly noticeable in certain types of searches, such as recipe queries.
For instance, anyone who has searched for a recipe can attest to the tendency for the most generic or common recipes to rank highest, often at the expense of quality. Take, for example, a search for “cream of chicken soup,” where nearly every top-ranked recipe calls for just two cans of chicken soup as the main ingredient.
Similarly, a search for “Authentic Mexican Tacos” might yield recipes featuring ingredients such as soy sauce, ground beef, “cooked chicken,” store-bought taco shells, and beer – a far cry from what one might expect in terms of authenticity.
Not all search engine results pages (SERPs) for recipes are of poor quality, but some of the more general recipes that Google ranks can be exceedingly basic, to the point where even someone with minimal cooking experience could prepare them on a hotplate.
Robin Donovan, a cookbook author and online recipe blogger, pointed out a significant issue with Google’s recipe search rankings after the HCU (Google’s Helpful Content Update):
“The biggest problem is that you get a bunch of Reddit threads or sites with untested user-generated recipes, or scraper sites that are stealing recipes from hardworking bloggers.”
In essence, the content that surfaces in these search results is far from helpful, especially for individuals seeking tested and well-written recipes that they can trust to produce delicious results.
It’s challenging to overlook the perception that Google’s rankings consistently favor big brand websites and lower-quality webpages across a spectrum of topics.
Small sites do occasionally rise to prominence, but the trajectory usually ends the same way: they evolve into yet another big brand dominating the SERPs.
Plenty of explanations for subpar SERPs have been offered. So, what exactly is the underlying dynamic at play?
The recent Google antitrust trial shed light on Navboost, an algorithm whose signals were described in testimony as a pivotal ranking factor. Navboost analyzes user engagement signals to help determine, among other things, how relevant webpages are to particular topics.
The concept of leveraging engagement signals to gauge user expectations seems logical. After all, Google prioritizes user experience, and who better to determine what’s beneficial for users than the users themselves, right?
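Google has not published how Navboost actually works, so the following is a purely hypothetical sketch, with invented names, signals, and weights, of the general idea: blending user-interaction data such as click-through rate into a base relevance score.

```python
# Purely hypothetical sketch of engagement-based re-ranking. Google has not
# published Navboost's mechanics; this only illustrates the general concept of
# blending user-interaction signals into a baseline relevance score.

from dataclasses import dataclass

@dataclass
class Result:
    url: str
    relevance: float   # baseline topical relevance, 0..1 (invented values)
    clicks: int        # how often users clicked this result for the query
    impressions: int   # how often it was shown for the query

def engagement_score(r: Result) -> float:
    # Click-through rate as a crude stand-in for "users preferred this".
    return r.clicks / r.impressions if r.impressions else 0.0

def rerank(results: list[Result], engagement_weight: float = 0.5) -> list[Result]:
    # Blend topical relevance with engagement; a page that attracts more
    # clicks can outrank a more relevant but less-clicked page.
    def combined(r: Result) -> float:
        return (1 - engagement_weight) * r.relevance + engagement_weight * engagement_score(r)
    return sorted(results, key=combined, reverse=True)

results = [
    Result("https://big-brand.example/recipe", relevance=0.6, clicks=900, impressions=1000),
    Result("https://indie-blog.example/recipe", relevance=0.9, clicks=300, impressions=1000),
]
for r in rerank(results):
    print(r.url)  # big-brand.example ranks first despite lower topical relevance
```

In this toy blend, the familiar big-brand page wins on engagement even though the independent page is more topically relevant, which is the pattern discussed below.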
However, let’s consider an intriguing example: the notable song of 1991, “Smells Like Teen Spirit” by Nirvana, didn’t make Billboard’s year-end Hot 100 for that year. Meanwhile, Michael Bolton and Rod Stewart each landed multiple songs on the list, Rod Stewart among them with a track titled “The Motown Song” (a somewhat obscure piece, anyone recall it?).
It wasn’t until the following year that Nirvana finally broke into the charts…
In my view, considering the significant influence of user interactions on search rankings, it’s plausible to suggest that Google’s algorithms might reflect a similar pattern linked to users’ biases.
One prevalent bias that could come into play is the Familiarity Bias. This bias occurs when people tend to favor familiar options over unfamiliar ones, even if the unfamiliar ones might be objectively better. This preference for the familiar often manifests in consumer behavior, such as brand loyalty.
Behavioral scientist Jason Hreha aptly defines Familiarity Bias as follows: “The familiarity bias is a phenomenon in which people tend to prefer familiar options over unfamiliar ones, even when the unfamiliar options may be better. This bias is often explained in terms of cognitive ease, which is the feeling of fluency or ease that people experience when they are processing familiar information. When people encounter familiar options, they are more likely to experience cognitive ease, which can make those options seem more appealing.”
With the exception of specific queries, like those related to health where editorial discretion may be more evident, I don’t believe Google intentionally favors certain types of websites, such as brands, based solely on editorial decisions. Instead, it’s plausible that the algorithms may inadvertently amplify the Familiarity Bias inherent in user behavior.
Google relies on numerous factors to determine rankings, with a strong emphasis on user experience. It’s plausible that user preferences carry more weight than signals from review systems. This might explain why prominent brand websites, even with questionable reviews, often outrank honest independent review platforms.
Historically, Google’s algorithms have occasionally produced subpar search results, and Google has added systems to compensate. For instance, the Panda algorithm targeted low-quality, thin content, while the Reviews System attempts to reward in-depth, first-hand reviews over thin, superficial ones.
Despite mechanisms in place to identify and filter out low-quality sites, prominent brands and subpar content can still achieve high rankings. This raises questions about the efficacy of Google’s systems in consistently delivering relevant and reliable search results.
It’s possible that users’ preferences, as indicated by their interaction signals, drive the ranking of these sites. The critical question arises: will Google persist in prioritizing content that aligns with user biases and preferences, even if it sacrifices quality?
There’s a dilemma: should Google prioritize quality content, potentially risking user satisfaction if it’s deemed too complex, or should it cater to the lowest common denominator, akin to how mainstream pop stars tailor their content to wide audiences?
This dilemma raises broader questions about the balance between serving user preferences and promoting quality content.
Original news from SearchEngineJournal