Eli Schwartz

Our Blog

We only talk about the good stuff: curated collections, freebies, well-researched articles and much more.

SEO

5 Lessons from Google’s Super Bowl Commercial

In this year’s Super Bowl, as in all the past years, Google ran a commercial. In prior years Google highlighted search, but this year they decided to put the focus on the Google Assistant. Based on immediate Twitter responses as well as recaps of the ads that ran this year, Google definitely hit the mark by causing an emotional response.

SEO

The Compounding Effect of SEO

In the world of investing, whether at an exclusive hedge fund or in a basic retirement plan, the effect of compounding growth is a bigger driver of returns than lucky timing or discovering massively undervalued opportunities. Explained simply, compounding is the idea of earning returns – even small ones – which are then added to the principal. Compounding is generating earnings from earnings.

SEO

There is no reason to fear the Google Algorithm

Every time there is a rumor of a Google algorithm update, a general panic ripples through the massive community of people who are heavily reliant on free traffic from Google’s search users. There is a collective holding of breath while the numbers are analyzed, and then (hopefully) a sigh of relief at having survived the algo update unscathed.

After the update is released and especially if it’s confirmed by Google, there is a slew of articles and analyses produced attempting to dissect what it is that Google changed and how to win in the new paradigm.

In my opinion, all of this angst is entirely misplaced and is rooted in a fundamental misunderstanding of what exactly happens in a Google algorithm update. The Google algorithm is made out to be some sort of mythical secret recipe cooked up in a lab, designed to simultaneously rob and reward sites at the whims of a magical all-knowing wizard. In this scenario, the goal of every SEO and webmaster is to dupe this wizard and come out on the winning side of every update.

Multiple Algorithms

Nothing could be further from the truth. Google’s algorithm isn’t even a single algorithm; rather, it’s a confluence of multiple algorithms. In Google’s guide on how search works, the algorithm is ALWAYS referred to in the plural. If you parse that page and read tweets from Googlers that mention algorithms, it appears that there are three primary algorithms, each of which has a different purpose.

  1. Crawling – This algorithm is designed to crawl and understand the entire web.
  2. Indexing – This algorithm determines how to cache a webpage and what database tags should be used to categorize it.
  3. Ranking – Somewhat self-explanatory; this algorithm uses the information from the first two to apply a ranking methodology to every page.

There is also a fourth primary algorithm, which is tasked with understanding a user’s query and then modifying it into something else when the search engine queries the database. This is the algorithm affected by Google’s announcement of BERT.

Understanding Google’s algorithms in this light, it makes a lot more sense how Google could claim to update their algorithms multiple times per day.

These algorithms are extensive and complex software programs which constantly need to be updated based on real scenarios. As anomalies are found by search engineers, they are patched just as a bug in any other software program would be. At any other company this might just be a bug fix, but in search it translates to an algorithm update.

Product updates

In any software company where the software is the product, there are product updates that happen multiple times per year. There are always changes being made, some visible and others not so much. As an example, Facebook is constantly tweaking all aspects of their product; they didn’t just launch their news feed many years ago and leave it alone. Even our phone operating systems, whether Android or iOS, are updated in a major way at least once per year.

Google, like any other software company, releases updates that take big leaps forward in their product; however, in Google’s case they are called “major algorithm updates” instead of just product updates. This phrasing alone is enough to induce panic attacks.

Algorithms don’t hurt

Now with this knowledge of what exactly an algorithm update is, it is easier to understand why there really is never a reason to panic. When Google’s product managers determine that there are improvements to make in how the search product functions, they are usually tweaks at the margins. The updates are designed to address flaws in how users experience search. Much like a phone operating system leaps forward in a new update, Google’s major updates make significant improvements in user experiences.

If a site experiences a drop in search traffic after a major algorithm update, it is rarely because the entire site was targeted. Typically, while one collection of URLs may be demoted in search rankings, others more than likely improved.

Understanding what those leaps forward are requires taking a deep dive into Google Search Console to drill into which URLs saw drops in traffic and which saw gains. While a site can certainly see a steep drop-off after an update, that is simply because it had more losers than winners; it is most definitely not because the algorithm punished it.

In many cases, sites might not have even lost traffic – they only lost impressions that were already not converting into clicks. Looking at the most recent update, where Google removed the organic listings of sites that have a featured snippet ranking, I have seen steep drops in impressions while clicks are virtually unchanged.

Declaring a site to be a winner or loser after an update neglects the granular data that might have led to the significant changes in traffic. It is for this reason that websites should not fear the algorithm if their primary focus is on providing an amazing and high quality experience for users. The only websites that have something to fear are those that should not have had high search visibility because of a poor user experience.

Past algorithm updates

In recent times, it is rare for a site that provides a quality experience for users – defined as satisfying a user’s query intent – to have all of its URLs demoted in an update. If that does happen, the site was likely benefitting from a “bug” in how Google worked and was already living on borrowed time. Websites that exploit loopholes in the way that Google ranks the web should always be aware that Google will eventually close the loophole for the good of the entire Internet.

There were certainly times in the more distant past when entire sites were targeted by algorithm updates, but that is no longer the case. Panda, which was designed to root out low-quality content, Penguin, which demoted unnatural links, and Medic, which demoted incorrect medical information, all had specific targets, while other sites were left relatively untouched. If a site had been on the losing side of the algorithms prior to an update because competitors were exploiting loopholes, it likely saw significant gains as those competitors dropped out of search.

Updates are a fact of search life

Google will, and should, continuously update their algorithms so that their product evolves into something that retains their users. If they left the algorithm alone, they would risk being overrun by spammers who take advantage of loopholes, and Google would go the way of AOL, Excite, Yahoo and every other search engine that is no longer relevant.

Instead of chasing the algorithm, everyone who relies on search should maintain their focus on the user. The user is the ultimate customer of search, and focusing on the user will immunize a site from algorithm updates designed to protect the search experience. There is no algorithm wizard. The algorithm(s) have only one purpose, and that is to help a user find exactly what they seek.

SEO

Domain Authority Scores are NOT KPIs

Years ago, Google used to have a publicly visible score called PageRank that ranked the authority of a website based on backlinks. All websites began with a score of zero, and as they acquired valuable backlinks the score moved up to a maximum of ten. Most well-trafficked sites hovered around a five or six, with exceptional sites going as high as seven or eight. Scores of nine or ten were reserved for the most authoritative sites on the Internet like Whitehouse.gov, Adobe and Google itself.

(Side note: In 2009, Google Japan participated in a scheme that could have been construed as an attempt to build backlinks. As a result, they were penalized with a public PageRank penalty that dropped their score from nine to five. In reality, this had no impact on their actual rankings, but it was perceived as a penalty.)

Aside from being used as an indicator of how valuable a backlink from a particular site might be, the score was utterly useless. Apparently, Google realized that showing visible PageRank was facilitating a link acquisition economy that they did not want to exist, so they deprecated the visible aspect of PageRank. (Note: Google’s actual ranking algorithm still uses PageRank; they just don’t share a score.)

When Google stopped sharing PageRank, other tools that calculated web authority stepped into the void. Moz’s Domain Authority immediately became a popular way of valuing link acquisition efforts, although there were alternatives from SEMRush and Majestic.

Today there are many options for valuing links, and any tool that crawls the web will attempt to quantify the value of a website’s inbound links in a single metric. The tools calculate these scores using a methodology similar to how Google’s patents claim Google computes this: each tool values all the links into a site, then the value of the links into each of those linking sites, and so on, to arrive at a total score.
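
To make the idea of recursive link valuation concrete, here is a toy sketch of that kind of iterative computation in Python. It is loosely in the spirit of PageRank-style metrics, not any vendor’s actual Domain Authority formula; the damping factor, iteration count and example graph are arbitrary assumptions for illustration.

```python
# Toy illustration of recursive link scoring, loosely in the spirit of
# PageRank-style metrics. NOT any vendor's actual formula; the damping
# factor and iteration count are arbitrary choices.

def link_scores(link_graph, damping=0.85, iterations=20):
    """link_graph maps each site to the list of sites it links to."""
    sites = list(link_graph)
    score = {site: 1.0 / len(sites) for site in sites}
    for _ in range(iterations):
        new_score = {site: (1 - damping) / len(sites) for site in sites}
        for source, targets in link_graph.items():
            if not targets:
                continue  # toy simplification: sites with no outlinks pass nothing on
            share = damping * score[source] / len(targets)
            for target in targets:
                if target in new_score:
                    new_score[target] += share
        score = new_score
    return score

# Example: three sites linking to each other.
graph = {
    "a.com": ["b.com", "c.com"],
    "b.com": ["c.com"],
    "c.com": ["a.com"],
}
print(link_scores(graph))
```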

As the explanation on each of these sites will elaborate, none of these calculations are used by Google in their rankings and there is no proven impact from the scores on actual rankings. Unlike these vanity metrics, Google computes a score (or multiple scores) in real time with information only available to Google. Google knows which sites are recently hacked and should be excluded, which sites have a pattern of link manipulation and most of all their AI is processing all of the internet at scale.

A site with an authority score (from any tool) of zero can still rank for queries where it is determined to be the most relevant, and a site with a score of one hundred will not rank for a query where it is not relevant.

In actuality, these scores are nice to know, but are vanity metrics just like rankings. If you are in the link selling business, a higher score will help generate more sales, but otherwise the score will not do much.

Domain Authority is not a goal

Too often, I have heard of SEO teams where one of the goals for the year was to increase their score in one of these tools. This is a futile effort for many reasons, but it also takes their focus away from what is truly important. Even acquiring many valuable links may not influence the score, so there is no way to guarantee that this goal will be achieved.

However, even if the team is able to increase the score by a few notches, it will not increase revenue or even organic traffic, which is how the SEO team should really be measured. (Note: there have been many correlation studies which indicate a potential relationship between higher scores and more rankings, but these are not causation studies.)

What should be the goal

Instead, the SEO team should focus on efforts that increase organic conversions, such as creating great content, building a technically sound site and satisfying user intent. Content that is high quality, relevant and helpful will also attract links, which might then lift a score. Focusing on content as the end in itself, rather than as a means to a higher authority score, is far closer to any revenue goal.

If a team is doing this, their SEO efforts will certainly pay off in organic conversions and traffic even if their domain scores never hit some arbitrary magic number. SEO is an acquisition channel that must impact the bottom line and not just vanity metrics.

SEO

How to use the Wayback Machine for SEO

In 2001, a nonprofit named the Internet Archive launched a new tool called the Wayback Machine on the URL: archive.org.

The mission of the Internet Archive was to build a digital library of the Internet’s history, much the same way paper copies of newspapers are saved in perpetuity.

Because webpages are constantly changing, the Wayback Machine crawlers frequently visit and cache pages for the archive.

Their goal was to make this content available for future generations of researchers, historians, and scholars. But this data is just as valuable to marketers and SEO professionals.

Whenever I am working on a project that involves a steep change in traffic, either for my own site or a competitor’s, one of the first places I will look is the cached pages from before and after the change in traffic.

Even if you aren’t doing forensic analysis on a site, just having access to a site’s changelog can be a valuable tool.

You can find old content or even recall a promotion that was run in the previous year.

Troubleshooting with the Wayback Machine

Much like looking at a live website, the cached pages will have all the information available that might explain a shift in traffic.

The entire website, with all HTML included, is contained within the cache, which makes it fairly simple to identify obvious structural or technical changes.

In comparing the differences between a before and after image of my site or a competitor’s, I look for issues with:

  • On-page meta.
  • Internal linking.
  • Image usage.
  • And even any dynamic portions of the page that might have been added or removed.

Here are the steps to use the Wayback Machine for troubleshooting.

1. Put your URL into the search box of Archive.org

This does not need to be a homepage. It can be any URL on the site.

2. Choose a date where you believe the code may have changed

Note the color coding of the dates:

  • Red means there was an error.
  • Green indicates a redirect.
  • Blue means there was a good cache of the page.

You may have to continue picking dates and then digging through each version until you find something interesting worth looking at further.

For larger sites, you will find that homepages are cached multiple times per day, while other sites will only be cached a few times per year.
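
If you would rather find snapshots programmatically than click through the calendar, the Wayback Machine also offers an availability API that returns the snapshot closest to a given date. A minimal sketch in Python (it assumes the third-party requests package, and the URL and timestamp below are placeholders):

```python
# Minimal sketch: look up the archived snapshot closest to a given date
# via the Wayback Machine availability API. URL and timestamp are placeholders.
import requests

def closest_snapshot(url, timestamp):
    """timestamp is YYYYMMDD (or YYYYMMDDhhmmss); returns a snapshot URL or None."""
    resp = requests.get(
        "https://archive.org/wayback/available",
        params={"url": url, "timestamp": timestamp},
        timeout=30,
    )
    resp.raise_for_status()
    closest = resp.json().get("archived_snapshots", {}).get("closest")
    return closest["url"] if closest else None

print(closest_snapshot("example.com/some-page", "20190601"))
```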

3. The cached page from archive.org will load in your browser like any website except that it will have a header from Archive.org

Look for obvious changes in structure and content that may have led to a change in search visibility.

4. Open the source code of the page and search for:

  • Title
  • Description
  • Robots
  • Canonicals
  • JavaScript
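
If you do these comparisons often, the same elements can be pulled out of a cached page with a short script. A rough sketch, assuming the requests and beautifulsoup4 packages and a snapshot URL like the one returned by the availability lookup above (or copied straight from your browser):

```python
# Rough sketch: extract the key on-page SEO elements from an archived snapshot.
import requests
from bs4 import BeautifulSoup

def extract_seo_elements(snapshot_url):
    html = requests.get(snapshot_url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    description = soup.find("meta", attrs={"name": "description"})
    robots = soup.find("meta", attrs={"name": "robots"})
    canonical = soup.find("link", rel="canonical")
    return {
        "title": soup.title.get_text(strip=True) if soup.title else None,
        "description": description.get("content") if description else None,
        "robots": robots.get("content") if robots else None,
        "canonical": canonical.get("href") if canonical else None,
        # JavaScript references (note: the Wayback toolbar injects some of its own scripts).
        "scripts": [s.get("src") for s in soup.find_all("script", src=True)],
    }

# Placeholder snapshot URL; swap in the cached page you are investigating.
print(extract_seo_elements("https://web.archive.org/web/20190601000000/https://www.example.com/"))
```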

5. Compare anything that is different from the current site and analyze causal or correlative relationships

No detail is too small to be investigated. Look at things like cross-links, words used on pages, and even for evidence that a site may have been hacked during a particular time period.

You should even look at the specific language in any calls to action as a change here might impact conversions even if traffic now is higher than the time of the Wayback Machine’s cache.

Robots File Troubleshooting

The Wayback Machine even retains snapshots of robots.txt files so if there was a change in crawling permissions the evidence is readily available.

This feature has been amazingly useful for me when sites seem to have dropped out of the index mysteriously with no obvious penalty, spam attack, or a presently visible issue with a robots.txt file.

To find the robots file history, just drop the robots.txt URL into the search box, the same as any other page.

After that, choose a date and then do a diff analysis between the archived robots file and the current one. There are a number of free tools online which allow for comparisons between two different sets of text.
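
The comparison can also be done in a few lines with Python’s standard difflib module. In this sketch, the live robots.txt URL and the archived snapshot address are placeholders to swap for the site you are investigating:

```python
# Sketch: diff an archived robots.txt against the live one.
import difflib
import requests

live = requests.get("https://www.example.com/robots.txt", timeout=30).text
# Placeholder Wayback snapshot address for the same file.
archived_url = "https://web.archive.org/web/20190601000000/https://www.example.com/robots.txt"
archived = requests.get(archived_url, timeout=30).text

diff = difflib.unified_diff(
    archived.splitlines(),
    live.splitlines(),
    fromfile="archived robots.txt",
    tofile="live robots.txt",
    lineterm="",
)
print("\n".join(diff))
```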

Backlink Research

An additional less obvious use case for the Wayback Machine is to identify how competitors may have built backlinks in the past.

Using a tool like Ahrefs I have looked at the “lost” links of a website and then put them into the Wayback Machine to see how they used to link to a target website.

A natural link shouldn’t really get “lost” and this is a great way to see why the links might have disappeared.

Gray Hat Uses

Aside from these incredibly useful ways to use the Wayback Machine to troubleshoot SEO issues, there are also some seedier ways that some use this data.

For those that are building private blog networks (PBNs) for backlink purposes, the archived site is a great way to restore the content of a recently purchased expired domain.

The restored site is then filled with links to other sites in the network.

Affiliates

One other way, again from the darker side of things, that people have used this restored content is to turn it into an affiliate site for that category.

For example, if someone bought an expired domain for a bank, they would restore the content and then place CTAs all over the site to fill out a mortgage form.

The customer might think they were getting in touch with a bank. However, in reality, their contact info is being auctioned off to a variety of mortgage brokers.

Not to end on a dark note, there is one final amazing way to use the Wayback Machine and it is the one intended by the creators of the site.

This is the archive of everything on the web, and if someone were researching Amazon’s atmospheric growth over the last two decades through the progression of their website, this is where they would find an image of Amazon’s early homepage and every subsequent version of it.

Shady use cases aside, the Wayback Machine is one of the best free tools you can have in your digital marketing arsenal. There is simply no other tool that has 18 years of history of nearly every website in the world.


SEO

HTML Sitemaps for SEO – 7 reasons you need one

A sitemap guides your website visitors to where they want to go. It’s where they turn if they haven’t found what they are looking for in those dropdown menus.

Beyond helping your visitors navigate your website, which should be the primary focus of any marketing effort, there are many other reasons to use a sitemap.

First, it’s important to understand that there are two types of sitemaps:

  • XML sitemaps
  • HTML sitemaps

What Are XML Sitemaps?

XML sitemaps help search engines and spiders discover the pages on your website.

These sitemaps give search engines a website’s URLs and offer a complete map of all the pages on a site. This helps search engines prioritize the pages that they will crawl.

There is information within the sitemap that shows page change frequency on one URL versus others on that website, but it is unlikely that this has any effect on rankings.

An XML sitemap is very useful for large websites that a spider might otherwise take a long time to crawl.

Every site has a specific amount of crawl budget allocated to it, so no search engine will simply crawl every URL the first time it is encountered.

An XML sitemap is a good way for a search engine to build its queue of the pages it wants to crawl.
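
As a rough illustration of what the format contains, here is a minimal sketch that writes a sitemap following the sitemaps.org protocol. The URLs and dates are placeholders; a real site would generate this from its database or CMS (and could also include optional tags like changefreq):

```python
# Minimal sketch: build an XML sitemap following the sitemaps.org protocol.
from xml.sax.saxutils import escape

def build_xml_sitemap(urls):
    """urls is a list of (loc, lastmod) pairs; lastmod may be None."""
    lines = ['<?xml version="1.0" encoding="UTF-8"?>',
             '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">']
    for loc, lastmod in urls:
        lines.append("  <url>")
        lines.append(f"    <loc>{escape(loc)}</loc>")
        if lastmod:
            lines.append(f"    <lastmod>{lastmod}</lastmod>")
        lines.append("  </url>")
    lines.append("</urlset>")
    return "\n".join(lines)

# Placeholder URLs for illustration.
print(build_xml_sitemap([
    ("https://www.example.com/", "2020-01-15"),
    ("https://www.example.com/products/widgets", None),
]))
```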

What Are HTML Sitemaps?

HTML sitemaps ostensibly serve website visitors. The sitemaps include every page on the website – from the main pages to lower-level pages.

An HTML sitemap is just a clickable list of pages on a website. In its rawest form, it can be an unordered list of every page on a site – but don’t do that.

This is a great opportunity to create some order out of chaos, so it’s worth making the effort.
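
As a small illustration of creating that order, here is a hedged sketch that renders a flat page list as a categorized HTML sitemap instead of one long unordered list. The section names and URLs are placeholders; a real implementation would group pages by the site’s own taxonomy:

```python
# Sketch: render a categorized HTML sitemap from sections of (title, url) pairs.
from html import escape

def build_html_sitemap(sections):
    """sections maps a section name to a list of (title, url) pairs."""
    parts = ["<h1>Sitemap</h1>"]
    for section, pages in sections.items():
        parts.append(f"<h2>{escape(section)}</h2>")
        parts.append("<ul>")
        for title, url in pages:
            parts.append(f'  <li><a href="{escape(url)}">{escape(title)}</a></li>')
        parts.append("</ul>")
    return "\n".join(parts)

# Placeholder sections and pages.
print(build_html_sitemap({
    "Products": [("Widgets", "/products/widgets"), ("Gadgets", "/products/gadgets")],
    "Guides": [("Getting started", "/guides/getting-started")],
}))
```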

Why You Should Leverage HTML Sitemaps

While you may already use an XML sitemap – and some insist that an HTML sitemap is no longer necessary – here are seven reasons to add (or keep) an HTML sitemap.

1. Organize Large Websites

Your website will grow in size.

You may add an ecommerce store with several departments, or you may expand your product portfolio. Or, more likely, the site just grows as new people are added to the company.

However, this growth can leave visitors confused about where to go or what you have to offer.

The HTML sitemap works in a similar way to a department store or shopping mall map.

The sitemap is a great way for the person maintaining the sitemap to take stock of every page and make sure it has its rightful home somewhere in the site.

This is the directory for users that can’t find the pages they are looking for elsewhere on the site and, as a last resort, this should help them get there.

2. Serve as a Project Manager & Architect

Think of the HTML sitemap as an architectural blueprint for your website.

The sitemap becomes a project management tool. It oversees the structure and connections between pages and subpages.

It’s also a forcing function to make sure that you have a clean hierarchy and taxonomy for the site.

A good sitemap is like a well-organized daily schedule.

As any busy person knows, there’s a big difference between an agenda where every meeting is popped on at random and one that is themed and organized around time blocks.

In either case, an agenda is still an agenda but an organized one is far more useful for everyone.

3. Highlight the Website’s Purpose

As a content-based document, the HTML sitemap serves as a way to further define your website’s specific value.

Enhance this benefit by using SEO to identify the most unique and relevant keywords to include on the sitemap.

Anchor text is a great way of creating keyword relevancy for a page and for pages without many cross-links, a sitemap is an easy alternative to use choice anchor text.

To understand the power of anchor text alone, look at the search results for the query “click here.”

4. Speed the Work of Search Engine Crawlers

You want to help those search engines out in any way you can and take control wherever possible. That assistance includes helping them find your content and moving it up in the crawl queue.

While an XML sitemap is just a laundry list of links, HTML links are actually the way search crawlers prefer to discover the web.

The HTML sitemap helps call attention to that content by putting the spotlight on your website’s most important pages. You can also submit the text version of your sitemap to Google.

5. Increase Search Engine Visibility

With some websites, Google and other search engines may not go through the work of indexing every webpage.

For example, if you have a link on one of your webpages, then search bots may choose to follow that link.

The bots want to verify that the link makes sense. Yet, in doing so, the bots may never return to continue indexing the remaining pages.

The HTML sitemap can direct these bots to get the entire picture of your site and consider all the pages. In turn, this can facilitate the bots’ job and they may stay longer to follow the page navigation laid out for them.

Not only do a taxonomy and hierarchy help users find their way, but they are incredibly important for search crawlers, too. The sitemap can help the crawlers understand the website’s taxonomy.

There is no limit to how big a sitemap can be, and LinkedIn even has a sitemap which links to all of their millions of user pages.

6. Enable Page Links in a Natural Way to Drive Visitors

Not every page will connect through a link located in a header or footer.

The HTML sitemap can step in and find these ideal connections that address how visitors may look for things.

In this way, the HTML sitemap can reflect a visitor’s journey and guide them from research to purchase. In doing so, this benefit of HTML sitemaps can raise the organic search visibility of these linked pages.

In this instance, the sitemap is the fallback that ensures that there is never a page on a site that is orphaned.

I have seen huge gains in the traffic of sites that had issues with deeper pages not receiving many internal links.

7. Identify the Areas Where Site Navigation Could Improve

Once your website grows and you develop more pages, there may be duplicate data, which can be problematic for a search engine.

But, after mapping everything out, you’ll be able to use the sitemap to find the duplication and remove it.

As an aside, this only works if there is an owner of the sitemap who looks at it on a semi-regular basis.

Also, when you apply analytics or heat map tools, you may find that more visitors are using the HTML sitemap than the navigation.

This is a clear signal that the current navigation is missing the mark, and you need to reassess why this is happening.

It’s important to determine how you can change the site architecture to make it easier for visitors to find what they need.

For all these benefits, you’ll want to maintain an HTML sitemap. These benefits save resources (time and money). They also deliver an effective way to guide your website visitors to what they need and help close those sales.

Getting Started

If you don’t have an HTML sitemap but do use a platform like WordPress, I recommend one of the many sitemap plug-ins. The plug-ins automate much of the sitemap development and management process.

For larger sites, it might take running a full web crawl with a dedicated crawling tool.

The output of this web crawl should then serve as the basis for organizing all of a site’s pages around themes.
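
As a rough starting point for that organization, here is a sketch that groups a crawl export into themes by the first path segment of each URL. The file name and the “Address” column are assumptions about the crawler’s output; adjust them to whatever your tool produces:

```python
# Sketch: group crawled URLs into candidate sitemap sections by first path segment.
import csv
from collections import defaultdict
from urllib.parse import urlparse

sections = defaultdict(list)
with open("crawl_export.csv", newline="") as f:  # placeholder export file
    for row in csv.DictReader(f):
        url = row["Address"]  # column name is an assumption; adjust to your crawler
        path = urlparse(url).path.strip("/")
        section = path.split("/")[0] if path else "home"
        sections[section].append(url)

for section, urls in sorted(sections.items()):
    print(f"{section}: {len(urls)} pages")
```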

After developing the HTML sitemap, don’t forget to put a link on your website that is easy to find.

You can either put the link at the top, as part of a sidebar, or in a footer menu that remains accessible as visitors move from page to page.

However you look at it, an HTML sitemap is an easy way to get huge benefits without a lot of effort.
