Eli Schwartz



Marketers should use strong project code names – even for core parts of the job

In the military, operations are typically given a code name to align everyone to the mission and keep the actual objective secret. As the practice took hold in World Wars I and II, mission names began to connote strength. Winston Churchill had a role in personally picking the names of missions and even set out guidelines that encouraged planners to use names of heroes of antiquity, figures from Greek and Roman mythology, and names of war heroes.


5 Lessons from Google’s Super Bowl Commercial

In this year’s Super Bowl, as in years past, Google ran a commercial. In prior years, Google highlighted search, but this year they decided to put the focus on the Google Assistant. Based on immediate Twitter responses as well as recaps of the ads that ran this year, Google definitely hit the mark by provoking an emotional response.


The Compounding Effect of SEO

In the world of investing, whether at an exclusive hedge fund or a basic retirement plan, the effect of compounding growth is a bigger driver of returns than lucky timing or discovering massively undervalued opportunities. Explained simply, compounding is the idea of earning returns – even small ones – which are then added to the principal. Compounding is generating earnings from earnings.


There is no reason to fear the Google Algorithm

Every time there is a rumor of a Google algorithm update, a general panic ripples through the massive community of people who are heavily reliant on free traffic from Google’s search users. There is a collective holding of breath while the numbers are analyzed and then (hopefully) a sigh of relief at having survived the algo update unscathed.

After the update is released and especially if it’s confirmed by Google, there is a slew of articles and analyses produced attempting to dissect what it is that Google changed and how to win in the new paradigm.

In my opinion, all of this angst is entirely misplaced and is rooted in a fundamental misunderstanding of what exactly happens in a Google algorithm update. The Google algorithm is made out to be some sort of mythical secret recipe cooked up in a lab, designed to simultaneously rob and reward sites at the whims of a magical all-knowing wizard. In this scenario, the goal of every SEO and webmaster is to dupe this wizard and come out on the winning side of every update.

Multiple Algorithms

Nothing could be further from the truth. Google’s algorithm isn’t even a single algorithm; rather, it’s a confluence of multiple algorithms. Google’s own guide to how Search works always refers to the algorithms in the plural. If you parse that page and read tweets about algorithms written by Googlers, it appears that there are three primary algorithms, each with a different purpose.

  1. Crawling – This algorithm is designed to crawl and understand the entire web.
  2. Indexing – This algorithm determines how to cache a webpage and what database tags should be used to categorize it.
  3. Ranking – Somewhat self-explanatory: this algorithm uses the information from the first two to apply a ranking methodology to every page.

There is also a fourth primary algorithm, which is tasked with understanding a user’s query and rewriting it into something else before the search engine queries the database. This is the algorithm affected by Google’s announcement of BERT.

Understanding Google’s algorithms in this light, it makes a lot more sense how Google can claim to update its algorithms multiple times per day.

These algorithms are extensive and complex software programs which constantly need to be updated based on real scenarios. As anomalies are found by search engineers, they are patched just as a bug in any software program would be. In any other company this might just be a bug fix, but in search it translates to an algorithm update.

Product updates

In any software company where the software is the product, there are product updates that happen multiple times per year. There are always changes being made, some visible and others not so much. As an example, Facebook is constantly tweaking all aspects of its product; it didn’t just launch its news feed many years ago and leave it alone. Even our phone operating systems, whether Android or iOS, are updated in a major way at least once per year.

Google, like any other software company, releases updates that take big leaps forward for its product; in Google’s case, however, they are called “major algorithm updates” instead of just product updates. This phrasing alone is enough to induce panic attacks.

Algorithms don’t hurt

Now with this knowledge of what exactly an algorithm update is, it is easier to understand why there really is never a reason to panic. When Google’s product managers determine that there are improvements to make in how the search product functions, they are usually tweaks at the margins. The updates are designed to address flaws in how users experience search. Much like a phone operating system leaps forward in a new update, Google’s major updates make significant improvements in user experiences.

If a site experiences a drop in search traffic after a major algorithm update, it is rarely because the entire site was targeted. Typically, while one collection of URLs may be demoted in search rankings, others more than likely improved.

Understanding what those leaps forward are requires taking a deep dive into Google Search Console to drill into which URLs saw drops in traffic and which saw gains. While a site can certainly see a steep drop-off after an update, that is simply because it had more losers than winners; it is most definitely not because the algorithm punished the site.

In many cases, sites might not have even lost traffic – they only lost impressions that were already not converting into clicks. Looking at the most recent update, where Google removed the organic listing of sites that have a featured snippet ranking, I have seen steep drops in impressions while clicks remain virtually unchanged.

Declaring a site to be a winner or loser after an update neglects the granular data that might have led to the significant changes in traffic. It is for this reason that websites should not fear the algorithm if their primary focus is on providing an amazing and high quality experience for users. The only websites that have something to fear are those that should not have had high search visibility because of a poor user experience.

Past algorithm updates

In recent times, it is rare for a site that provides a quality experience for users (defined as satisfying a user’s query intent) to have all of its URLs demoted in an update. If that was truly the case, the site was likely benefitting from a “bug” in how Google worked and was already living on borrowed time. Websites that exploit loopholes in the way that Google ranks the web should always be aware that Google will eventually close the loophole for the good of the entire Internet.

There were certainly times in the more distant past when entire sites were targeted by algorithm updates, but that is no longer the case. Panda, which was designed to root out low quality content; Penguin, which demoted unnatural links; and Medic, which demoted incorrect medical information, each had specific targets, but other sites were left relatively untouched. If a site was on the losing side of the algorithms prior to such an update because competitors were exploiting loopholes, it likely saw significant gains as those competitors dropped out of search.

Updates are a fact of search life

Google will, and should, continuously update its algorithms so that its product evolves into something that retains its users. If Google just left the algorithm alone, it would risk being overrun by spammers who take advantage of loopholes, and Google would go the way of AOL, Excite, Yahoo, and every other search engine that has faded away.

Instead of chasing the algorithm, everyone who relies on search should maintain their focus on the user. The user is the ultimate customer of search, and focusing on the user will immunize a site from algorithm updates designed to protect the search experience. There is no algorithm wizard. The algorithm(s) have only one purpose: to help a user find exactly what they seek.


Domain Authority Scores are NOT KPIs

Years ago, Google used to have a publicly visible score called PageRank that rated the authority of a website based on backlinks. All websites began with a score of zero, and as they acquired valuable backlinks the score moved up to a maximum of ten. Most well-trafficked sites hovered around a five or six, with exceptional sites going as high as seven or eight. Scores of nine or ten were reserved for the most authoritative sites on the Internet, like Whitehouse.gov, Adobe, and Google itself.

(Side note: In 2009, Google Japan participated in a scheme that could have been construed as an attempt to build backlinks. As a result, they were penalized with a public PageRank penalty that dropped their score from nine to five. In reality, this had no impact on their actual rankings, but it was perceived as a penalty.)

Aside from being used as an indicator of how valuable a backlink from a particular site might be, the score was utterly useless. Apparently, Google realized that showing visible PageRank was facilitating a link acquisition economy that it did not want to exist, so it deprecated the visible aspect of PageRank. (Note: Google’s actual ranking algorithm still uses PageRank; they just don’t share a score.)

When Google stopped sharing page rank, other tools that calculated web authority stepped into the void. Moz’s Domain Authority immediately became a popular way of valuing link acquisition efforts, although there were alternatives from SEMRush and Majestic.

Today there are many options for valuing links, and any tool that crawls the web will attempt to quantify the value of a website’s inbound links in a single metric. The tools calculate the scores using a methodology similar to how Google’s patents claim Google computes PageRank. Each tool values all links into a site, then computes the value of the sites providing each of those links, and so on, to arrive at a total score.
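The exact formulas behind these commercial scores are proprietary, but the recursive idea described above – a link is worth more when the pages linking to its source are themselves well linked – is the same one behind the original PageRank paper. Here is a toy sketch of that iteration over a hypothetical four-page web (the page names and link graph are invented for illustration; real tools use far more signals):

```python
# Toy, PageRank-style authority calculation. This is NOT any tool's actual
# formula; it only illustrates the recursive "value of the value of links"
# idea described in the text.

DAMPING = 0.85  # standard damping factor from the original PageRank paper

def authority_scores(links, iterations=50):
    """links maps each page to the set of pages it links out to."""
    pages = set(links) | {p for targets in links.values() for p in targets}
    score = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        # Every page keeps a small baseline, then receives a share of the
        # score of every page that links to it.
        new = {p: (1 - DAMPING) / len(pages) for p in pages}
        for source, targets in links.items():
            if targets:
                share = DAMPING * score[source] / len(targets)
                for target in targets:
                    new[target] += share
        score = new
    return score

# Hypothetical four-page web: every page links to "hub".
web = {
    "a": {"hub"},
    "b": {"hub"},
    "c": {"hub", "a"},
    "hub": set(),
}
scores = authority_scores(web)
print(max(scores, key=scores.get))  # → hub
```

The page with the most inbound link value ends up with the highest score, which is why a score computed this way tracks link acquisition but says nothing about relevance to any particular query.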

As the explanation on each of these sites will elaborate, none of these calculations are used by Google in their rankings and there is no proven impact from the scores on actual rankings. Unlike these vanity metrics, Google computes a score (or multiple scores) in real time with information only available to Google. Google knows which sites are recently hacked and should be excluded, which sites have a pattern of link manipulation and most of all their AI is processing all of the internet at scale.

A site with an authority score (from any tool) of zero can still rank on queries if it is determined to be the most relevant, and a site with a score of one-hundred will not rank on a query where it is not relevant.

In actuality, these scores are nice to know, but are vanity metrics just like rankings. If you are in the link selling business, a higher score will help generate more sales, but otherwise the score will not do much.

Domain Authority is not a goal

Too often, I have heard of SEO teams where one of the goals for the year was to increase their score in one of these tools. This is a futile effort for many reasons, but it also takes their focus away from what is truly important. Even acquiring many valuable links may not influence the score, so there is no way to guarantee that this goal will be achieved.

However, even if the team is able to increase the score by a few notches, it will not increase revenue or even organic traffic, which is how the SEO team should really be measured. (Note: there have been many correlation studies which indicate a potential relationship between higher scores and more rankings, but these are not causation studies.)

What should be the goal

Instead, the SEO team should focus on efforts that increase organic conversions, such as creating great content, building a technically sound site, and satisfying user intent. Content that is high quality, relevant, and helpful will also attract links, which might then lift a score. Focusing on the content as an end in itself, rather than as a means to a higher authority score, brings the team closer to any revenue goal.

If a team is doing this, their SEO efforts will certainly pay off in organic conversions and traffic even if their domain scores never hit some arbitrary magic number. SEO is an acquisition channel that must impact the bottom line and not just vanity metrics.


How to use the Wayback Machine for SEO

In 2001, a nonprofit named the Internet Archive launched a new tool called the Wayback Machine on the URL: archive.org.

The mission of the Internet Archive was to build a digital library of the Internet’s history, much the same way paper copies of newspapers are saved in perpetuity.

Because webpages are constantly changing, the Wayback Machine crawlers frequently visit and cache pages for the archive.

Their goal was to make this content available for future generations of researchers, historians, and scholars. But this data is just as valuable to marketers and SEO professionals.

Whenever I am working on a project that involves a steep change in traffic, either for my core site or a competitor’s, one of the first places I will look is the cached pages from before and after the change in traffic.

Even if you aren’t doing forensic analysis on a site, just having access to a site’s changelog can be a valuable tool.

You can find old content or even recall a promotion that was run in the previous year.

Troubleshooting with the Wayback Machine

Much like looking at a live website, the cached pages will have all the information available that might explain a shift in traffic.

The entire website, with all HTML included, is contained within the cache, which makes it fairly simple to identify obvious structural or technical changes.

In comparing the differences between a before and after image of my site or a competitor’s, I look for issues with:

  • On-page meta.
  • Internal linking.
  • Image usage.
  • And even any dynamic portions of the page that might have been added or removed.

Here are the steps to use the Wayback Machine for troubleshooting.

1. Put your URL into the search box of Archive.org

This does not need to be a homepage. It can be any URL on the site.


2. Choose a date where you believe the code may have changed

Note the color coding of the dates:

  • Red means there was an error.
  • Green indicates a redirect happens.
  • Blue means there was a good cache of the page.

You may have to continue picking dates and then digging through each version until you find something interesting worth looking at further.

For larger sites, you will find that homepages are cached multiple times per day, while smaller sites may only be cached a few times per year.
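If you would rather not click through the calendar by hand, the Internet Archive exposes a public availability API that returns the snapshot closest to a given timestamp. This is a small sketch around it; the example URL and date are placeholders, and the live lookup (`closest_snapshot`) requires network access:

```python
# Sketch: find the Wayback Machine snapshot closest to a date via the
# public availability API (https://archive.org/wayback/available).
import json
import urllib.parse
import urllib.request

API = "https://archive.org/wayback/available"

def availability_url(page_url, timestamp):
    """Build the API query. timestamp is YYYYMMDD (finer precision allowed)."""
    query = urllib.parse.urlencode({"url": page_url, "timestamp": timestamp})
    return f"{API}?{query}"

def closest_snapshot(page_url, timestamp):
    """Return (snapshot_url, snapshot_timestamp) for the closest capture,
    or None if the page has never been archived. Requires network access."""
    with urllib.request.urlopen(availability_url(page_url, timestamp)) as resp:
        data = json.load(resp)
    snap = data.get("archived_snapshots", {}).get("closest")
    if snap and snap.get("available"):
        return snap["url"], snap["timestamp"]
    return None

# example.com and the date are placeholders, not from the article.
print(availability_url("example.com", "20200101"))
```

This is handy when you already know roughly when traffic shifted and want the nearest capture on either side of that date.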

3. The cached page from archive.org will load in your browser like any website except that it will have a header from Archive.org

Look for obvious changes in structure and content that may have led to a change in search visibility.

4. Open the source code of the page and search for:

  • Title
  • Description
  • Robots
  • Canonicals
  • JavaScript
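Pulling those tags out of a cached page’s source can also be scripted with the standard library alone. A minimal sketch, parsing an inline HTML sample (the sample markup is hypothetical; in practice you would feed it the archived page’s HTML):

```python
# Sketch: extract title, meta description, meta robots, and canonical
# from a page's HTML using only the standard library.
from html.parser import HTMLParser

class MetaAudit(HTMLParser):
    def __init__(self):
        super().__init__()
        self.found = {}
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self._in_title = True
        elif tag == "meta" and attrs.get("name") in ("description", "robots"):
            self.found[attrs["name"]] = attrs.get("content", "")
        elif tag == "link" and attrs.get("rel") == "canonical":
            self.found["canonical"] = attrs.get("href", "")

    def handle_data(self, data):
        if self._in_title:
            self.found["title"] = data.strip()

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

# Hypothetical snippet of an archived page's <head>.
sample = """<html><head>
<title>Old Homepage</title>
<meta name="description" content="An old description">
<meta name="robots" content="noindex">
<link rel="canonical" href="https://example.com/">
</head><body></body></html>"""

audit = MetaAudit()
audit.feed(sample)
print(audit.found)
```

Running the same audit on a before snapshot, an after snapshot, and the live page makes differences in these tags jump out immediately.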

5. Compare anything that is different from the current site and analyze causal or correlative relationships

No detail is too small to be investigated. Look at things like cross-links, words used on pages, and even for evidence that a site may have been hacked during a particular time period.

You should even look at the specific language in any calls to action as a change here might impact conversions even if traffic now is higher than the time of the Wayback Machine’s cache.

Robots File Troubleshooting

The Wayback Machine even retains snapshots of robots.txt files so if there was a change in crawling permissions the evidence is readily available.

This feature has been amazingly useful for me when sites seem to have dropped out of the index mysteriously with no obvious penalty, spam attack, or a presently visible issue with a robots.txt file.

To find the robots file history, just drop the robots.txt URL (e.g., example.com/robots.txt) into the search box.


After that, choose a date and then do a diff analysis between the archived file and the current robots file. There are a number of free tools online that allow comparisons between two different sets of text.
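The diff itself doesn’t require an online tool; Python’s standard library includes one. A quick sketch with two hypothetical robots.txt versions:

```python
# Sketch: diff an archived robots.txt against the current one with difflib.
# Both file bodies below are invented examples.
import difflib

archived = """User-agent: *
Disallow: /private/
""".splitlines()

current = """User-agent: *
Disallow: /private/
Disallow: /
""".splitlines()

diff = difflib.unified_diff(
    archived, current,
    fromfile="robots.txt (archived)",
    tofile="robots.txt (current)",
    lineterm="",
)
for line in diff:
    print(line)
```

In this invented example, the added “Disallow: /” line is exactly the kind of change that would explain a site mysteriously dropping out of the index.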

Backlink Research

An additional less obvious use case for the Wayback Machine is to identify how competitors may have built backlinks in the past.

Using a tool like Ahrefs I have looked at the “lost” links of a website and then put them into the Wayback Machine to see how they used to link to a target website.

A natural link shouldn’t really get “lost” and this is a great way to see why the links might have disappeared.

Gray Hat Uses

Aside from these incredibly useful ways to use the Wayback Machine to troubleshoot SEO issues, there are also some seedier ways that some use this data.

For those that are building private blog networks (PBNs) for backlink purposes, the archived site is a great way to restore the content of a recently purchased expired domain.

The restored site is then filled with links to other sites in the network.


One other way, again from the darker side of things, that people have used this restored content is to turn it into an affiliate site for that category.

For example, if someone bought an expired domain for a bank, they would restore the content and then place CTAs all over the site to fill out a mortgage form.

The customer might think they were getting in touch with a bank. However, in reality, their contact info is being auctioned off to a variety of mortgage brokers.

Not to end on a dark note, there is one final amazing way to use the Wayback Machine and it is the one intended by the creators of the site.

This is the archive of everything on the web, and if someone were researching Amazon’s atmospheric growth over the last two decades through the progression of its website, this is where they would find images of Amazon’s early homepage and every subsequent version.


Shady use cases aside, the Wayback Machine is one of the best free tools you can have in your digital marketing arsenal. There is simply no other tool that has 18 years of history of nearly every website in the world.
