
The Compounding Effect of SEO

In the world of investing, whether at an exclusive hedge fund or a basic retirement plan, the effect of compounding growth is a bigger driver of returns than lucky timing or discovering massively undervalued opportunities. Explained simply, compounding is the idea of earning returns – even small ones – which are then added to the principal. Compounding is generating earnings from earnings.


There is no reason to fear the Google Algorithm

Every time there is a rumor of a Google algorithm update, a general panic ripples through the massive community of people who are heavily reliant on free traffic from Google’s search users. There is a collective holding of breath while the numbers are analyzed and then (hopefully) a sigh of relief at having survived the algo update unscathed.

After the update is released, and especially if it’s confirmed by Google, a slew of articles and analyses is produced attempting to dissect what Google changed and how to win in the new paradigm.

In my opinion, all of this angst is entirely misplaced and is rooted in a fundamental misunderstanding of what exactly happens in a Google algorithm update. The Google algorithm is made out to be some sort of mythical secret recipe cooked up in a lab, designed to simultaneously rob and reward sites at the whims of a magical all-knowing wizard. In this scenario, the goal of every SEO and webmaster is to dupe this wizard and come out on the winning side of every update.

Multiple Algorithms

Nothing could be further from the truth. Google’s algorithm isn’t even a single algorithm; rather, it’s a confluence of multiple algorithms. In Google’s guide on how search works, the algorithm is ALWAYS referred to in the plural. If you parse that page and read tweets by Googlers mentioning algorithms, it appears that there are three primary algorithms, each of which has a different purpose.

  1. Crawling – This algorithm is designed to crawl and understand the entire web.
  2. Indexing – This algorithm determines how to cache a webpage and what database tags should be used to categorize it.
  3. Ranking – Somewhat self-explanatory: it uses the information from the first two algorithms to apply a ranking methodology to every page.

There is also a fourth primary algorithm, which is tasked with understanding a user’s query and then rewriting it into something else when the search engine queries the database. This is the algorithm affected by Google’s announcement of BERT.

Understanding Google’s algorithms in this light, it makes a lot more sense how Google can claim to update its algorithms multiple times per day.

These algorithms are extensive and complex software programs that constantly need to be updated based on real scenarios. As anomalies are found by search engineers, they are patched just as a bug in any other software program would be. In every other company this might just be a bug fix, but in search it translates to an algorithm update.

Product updates

In any software company where the software is the product, there are product updates that happen multiple times per year. There are always changes being made, some visible and others not so much. As an example, Facebook is constantly tweaking all aspects of its product; it didn’t just launch its news feed many years ago and leave it alone. Even our phone operating systems, whether Android or iOS, are updated in a major way at least once per year.

Google, like any other software company, releases updates that take big leaps forward for its product; however, in Google’s case they are called “major algorithm updates” instead of just product updates. This phrasing alone is enough to induce panic attacks.

Algorithms don’t hurt

Now with this knowledge of what exactly an algorithm update is, it is easier to understand why there really is never a reason to panic. When Google’s product managers determine that there are improvements to make in how the search product functions, they are usually tweaks at the margins. The updates are designed to address flaws in how users experience search. Much like a phone operating system leaps forward in a new update, Google’s major updates make significant improvements in user experiences.

If a site experiences a drop in search traffic after a major algorithm update, it is rarely because the entire site was targeted. Typically, while one collection of URLs may be demoted in search rankings, others more than likely improved.

Understanding what those leaps forward are requires taking a deep dive into Google Search Console to drill into which URLs saw drops in traffic and which saw gains. While a site can certainly see a steep drop-off after an update, that is simply because it had more losers than winners, and it is most definitely not because the algorithm punished it.

In many cases, sites might not have even lost traffic – they only lost impressions that were already not converting into clicks. Looking at the most recent update, where Google removed the organic listing of sites that have a featured snippet ranking, I have seen steep drops in impressions while clicks are virtually unchanged.
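
To make that deep dive concrete, here is a minimal sketch in Python of the kind of before-and-after comparison described above, run against a page-level Search Console export. The file name, column names and update date are assumptions for illustration, not a documented export schema.

# A minimal sketch of the Search Console deep dive described above: compare
# per-URL clicks and impressions before and after an update date.
import csv
from collections import defaultdict
from datetime import date

UPDATE_DATE = date(2020, 1, 22)  # hypothetical update rollout date

before = defaultdict(lambda: [0, 0])  # url -> [clicks, impressions]
after = defaultdict(lambda: [0, 0])

with open("gsc_page_performance.csv", newline="") as f:
    for row in csv.DictReader(f):  # assumed columns: page, date, clicks, impressions
        bucket = before if date.fromisoformat(row["date"]) < UPDATE_DATE else after
        bucket[row["page"]][0] += int(row["clicks"])
        bucket[row["page"]][1] += int(row["impressions"])

# A URL can shed impressions while holding its clicks, which is not a "loss" at all.
for url in sorted(set(before) | set(after)):
    click_delta = after[url][0] - before[url][0]
    impression_delta = after[url][1] - before[url][1]
    print(f"{url}: clicks {click_delta:+d}, impressions {impression_delta:+d}")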

Declaring a site to be a winner or loser after an update neglects the granular data that might have led to the significant changes in traffic. It is for this reason that websites should not fear the algorithm if their primary focus is on providing an amazing and high quality experience for users. The only websites that have something to fear are those that should not have had high search visibility because of a poor user experience.

Past algorithm updates

In recent times, it is rare for a site that provides a quality experience for users – defined as satisfying a user’s query intent – to have all of its URLs demoted in an update. If that was truly the case, the site was likely benefitting from a “bug” in how Google worked and was already living on borrowed time. Websites that exploit loopholes in the way that Google ranks the web should always be aware that Google will eventually close the loophole for the good of the entire Internet.

There were certainly times in the more distant past where entire sites were targeted by algorithm updates, but that is no longer the case. Panda, which was designed to root out low quality content, Penguin, which demoted unnatural links, and Medic, which demoted incorrect medical information, had specific targets, but other sites were left relatively untouched. If a site was on the losing side of the algorithms prior to an update because competitors were exploiting loopholes, it likely saw significant gains as those competitors dropped out of search.

Updates are a fact of search life

Google will and should always continuously update its algorithms so that its product evolves into something that retains its users. If Google just left the algorithm alone, it would risk being overrun by spammers who take advantage of loopholes, and Google would go the way of AOL, Excite, Yahoo and every other search engine that is no longer relevant.

Instead of chasing the algorithm, everyone who relies on search should maintain their focus on the user. The user is the ultimate customer of search, and serving that user will immunize a site from algorithm updates designed to protect the search experience. There is no algorithm wizard. The algorithm(s) have only one purpose, and that is to help a user find exactly what they seek.


Domain Authority Scores are NOT KPIs

Years ago, Google had a publicly visible score called PageRank that ranked the authority of a website based on backlinks. All websites began with a score of zero, and as they acquired valuable backlinks the score moved up to a maximum of ten. Most well-trafficked sites hovered around a five or six, with exceptional sites going as high as seven or eight. Scores of nine or ten were reserved for the most authoritative sites on the Internet like Whitehouse.gov, Adobe and Google itself.

(Side note: In 2009, Google Japan was participating in a scheme that could have been construed as an attempt to build backlinks. As a result, they were penalized with a public PageRank penalty that dropped their score from nine to five. In reality, this had no impact on their actual rankings, but it was perceived as a penalty.)

Aside from being used as an indicator of how valuable a backlink from a particular site might be, the score was utterly useless. Apparently, Google realized that showing visible PageRank was facilitating a link acquisition economy that they did not want to exist, so they deprecated the visible aspect of PageRank. (Note: Google’s actual ranking algorithm still uses PageRank; they just don’t share a score.)

When Google stopped sharing PageRank, other tools that calculated web authority stepped into the void. Moz’s Domain Authority immediately became a popular way of valuing link acquisition efforts, although there were alternatives from SEMrush and Majestic.

Today there are many options for valuing links, and any tool that crawls the web will attempt to quantify the value of a website’s inbound links in a single metric. The tools calculate the scores using a methodology similar to the one Google’s patents describe. Each tool values all of the links into a site, then computes the value of each of those linking pages, and so on, to arrive at a total score.
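
For the curious, here is a minimal sketch of that iterative idea in Python, loosely modeled on the original PageRank formulation. The three-site link graph and the damping factor are illustrative assumptions, not any tool’s actual data or formula.

# A simplified illustration of iterative link valuation: every inbound link
# passes along a share of the linking page's own value, round after round,
# until the scores stabilize.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links out to."""
    pages = set(links) | {p for targets in links.values() for p in targets}
    rank = {page: 1.0 / len(pages) for page in pages}

    for _ in range(iterations):
        new_rank = {page: (1.0 - damping) / len(pages) for page in pages}
        for page, targets in links.items():
            if not targets:
                continue
            share = damping * rank[page] / len(targets)  # split value across outlinks
            for target in targets:
                new_rank[target] += share
        rank = new_rank
    return rank

# Hypothetical three-site web used purely for illustration.
example_links = {
    "site-a.com": ["site-b.com", "site-c.com"],
    "site-b.com": ["site-c.com"],
    "site-c.com": ["site-a.com"],
}
print(pagerank(example_links))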

As the explanation on each of these sites will elaborate, none of these calculations are used by Google in their rankings, and there is no proven impact from the scores on actual rankings. Unlike these vanity metrics, Google computes a score (or multiple scores) in real time with information only available to Google. Google knows which sites were recently hacked and should be excluded, which sites have a pattern of link manipulation, and, most of all, its AI is processing the entire Internet at scale.

A site with an authority score (from any tool) of zero can still rank on queries if it is determined to be the most relevant, and a site with a score of one hundred will not rank on a query where it is not relevant.

In actuality, these scores are nice to know, but are vanity metrics just like rankings. If you are in the link selling business, a higher score will help generate more sales, but otherwise the score will not do much.

Domain Authority is not a goal

Too often, I have heard of SEO teams where one of the goals for the year was to increase their score in one of these tools. This is a futile effort for many reasons, but it also takes their focus away from what is truly important. Even acquiring many valuable links may not influence the score, so there is no way to guarantee that this goal will be achieved.

However, even if the team is able to increase the score by a few notches, it will not increase revenue or even organic traffic, which is how the SEO team should really be measured. (Note: there have been many correlation studies which indicate a potential relationship between higher scores and more rankings, but these are not causation studies.)

What should be the goal

Instead, the SEO team should focus on efforts that increase organic conversions, such as creating great content, building a technically sound site and satisfying user intent. Content that is high quality, relevant and helpful will also attract links, which might then lift a score. Treating content as the end in itself, rather than as a means to the end of a higher authority score, is far closer to any revenue goal.

If a team is doing this, their SEO efforts will certainly pay off in organic conversions and traffic even if their domain scores never hit some arbitrary magic number. SEO is an acquisition channel that must impact the bottom line and not just vanity metrics.


How to use the Wayback Machine for SEO

In 2001, a nonprofit named the Internet Archive launched a new tool called the Wayback Machine at the URL archive.org.

The mission of the Internet Archive was to build a digital library of the Internet’s history, much the same way paper copies of newspapers are saved in perpetuity.

Because webpages are constantly changing, the Wayback Machine crawlers frequently visit and cache pages for the archive.

Their goal was to make this content available for future generations of researchers, historians, and scholars. But this data is just as valuable to marketers and SEO professionals.

Whenever I am working on a project that involves a steep change in traffic, either for my own site or a competitor’s, one of the first places I look is the cached pages from before and after the change in traffic.

Even if you aren’t doing forensic analysis on a site, just having access to a site’s changelog can be a valuable tool.

You can find old content or even recall a promotion that was run in the previous year.

Troubleshooting with the Wayback Machine

Much like looking at a live website, the cached pages will have all the information available that might explain a shift in traffic.

The entire website, with all HTML included, is contained within the cache, which makes it fairly simple to identify obvious structural or technical changes.

In comparing the differences between a before and after image of my site or a competitor’s, I look for issues with:

  • On-page meta.
  • Internal linking.
  • Image usage.
  • And even any dynamic portions of the page that might have been added or removed.

Here are the steps to use the Wayback Machine for troubleshooting.

1. Put your URL into the search box of Archive.org

This does not need to be a homepage. It can be any URL on the site.


2. Choose a date where you believe the code may have changed

Note the color coding of the dates:

  • Red means there was an error.
  • Green indicates a redirect.
  • Blue means there was a good cache of the page.

You may have to continue picking dates and then digging through each version until you find something interesting worth looking at further.

For larger sites, you will find that homepages are cached multiple times per day, while other sites will only be cached a few times per year.
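
If you prefer to script this step, here is a minimal sketch that lists the available snapshots of a URL over a date range using the Wayback Machine’s public CDX endpoint, so you can pick the before and after captures programmatically. The domain and date range are hypothetical examples.

# A minimal sketch for listing Wayback Machine snapshots of a URL.
import json
import urllib.parse
import urllib.request

def list_snapshots(url, start="20230101", end="20231231"):
    params = urllib.parse.urlencode({
        "url": url,
        "output": "json",
        "from": start,
        "to": end,
        "filter": "statuscode:200",  # only "good" captures (blue in the calendar)
    })
    with urllib.request.urlopen(f"http://web.archive.org/cdx/search/cdx?{params}") as resp:
        rows = json.load(resp)
    if not rows:
        return []
    header, captures = rows[0], rows[1:]
    ts, original = header.index("timestamp"), header.index("original")
    # Each capture can be replayed at https://web.archive.org/web/<timestamp>/<url>
    return [f"https://web.archive.org/web/{row[ts]}/{row[original]}" for row in captures]

for snapshot in list_snapshots("example.com/pricing"):
    print(snapshot)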

3. The cached page from archive.org will load in your browser like any website, except that it will have a header from Archive.org.

Look for obvious changes in structure and content that may have led to a change in search visibility.

4. Open the source code of the page and search for:

  • Title
  • Description
  • Robots
  • Canonicals
  • JavaScript

5. Compare anything that is different from the current site and analyze causal or correlative relationships

No detail is too small to be investigated. Look at things like cross-links, words used on pages, and even for evidence that a site may have been hacked during a particular time period.

You should even look at the specific language in any calls to action, as a change here might impact conversions even if traffic is now higher than it was at the time of the Wayback Machine’s cache.
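
Here is a rough sketch of that source-code comparison in Python: it pulls the title, description, robots and canonical tags out of an archived snapshot and the live page, then prints whatever differs. The snapshot URL is a hypothetical example, and a production version would want a real HTML parser rather than regular expressions.

# A rough sketch of steps 4 and 5: extract key head tags from an archived copy
# and from the live page so they can be compared side by side.
import re
import urllib.request

PATTERNS = {
    "title": r"<title[^>]*>(.*?)</title>",
    "description": r'<meta[^>]+name=["\']description["\'][^>]+content=["\'](.*?)["\']',
    "robots": r'<meta[^>]+name=["\']robots["\'][^>]+content=["\'](.*?)["\']',
    "canonical": r'<link[^>]+rel=["\']canonical["\'][^>]+href=["\'](.*?)["\']',
}

def head_tags(url):
    with urllib.request.urlopen(url) as resp:
        html = resp.read().decode("utf-8", errors="ignore")
    # If a tag is not found, report it as MISSING rather than raising an error.
    return {name: (re.search(pattern, html, re.I | re.S) or [None, "MISSING"])[1]
            for name, pattern in PATTERNS.items()}

archived = head_tags("https://web.archive.org/web/20230601000000/https://example.com/")
current = head_tags("https://example.com/")

for tag in PATTERNS:
    if archived[tag] != current[tag]:
        print(f"{tag} changed:\n  then: {archived[tag]}\n  now:  {current[tag]}")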

Robots File Troubleshooting

The Wayback Machine even retains snapshots of robots.txt files so if there was a change in crawling permissions the evidence is readily available.

This feature has been amazingly useful for me when sites seem to have dropped out of the index mysteriously with no obvious penalty, spam attack, or a presently visible issue with a robots.txt file.

To find the robots.txt file history, just drop the robots.txt URL into the search box.


After that, choose a date and then do a diff analysis against the current robots file. There are a number of free tools online which allow for comparisons between two different sets of text.
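
Here is a minimal sketch of that diff step using Python’s built-in difflib instead of an online comparison tool; the snapshot timestamp and domain are hypothetical examples.

# Compare an archived robots.txt against the live one.
import difflib
import urllib.request

def fetch_lines(url):
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8", errors="ignore").splitlines()

old = fetch_lines("https://web.archive.org/web/20230601000000/https://example.com/robots.txt")
new = fetch_lines("https://example.com/robots.txt")

# Lines starting with "-" were removed since the snapshot; "+" lines were added.
for line in difflib.unified_diff(old, new, fromfile="archived robots.txt",
                                 tofile="current robots.txt", lineterm=""):
    print(line)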

Backlink Research

An additional less obvious use case for the Wayback Machine is to identify how competitors may have built backlinks in the past.

Using a tool like Ahrefs I have looked at the “lost” links of a website and then put them into the Wayback Machine to see how they used to link to a target website.

A natural link shouldn’t really get “lost” and this is a great way to see why the links might have disappeared.

Gray Hat Uses

Aside from these incredibly useful ways to use the Wayback Machine to troubleshoot SEO issues, there are also some seedier ways that some use this data.

For those that are building private blog networks (PBNs) for backlink purposes, the archived site is a great way to restore the content of a recently purchased expired domain.

The restored site is then filled with links to other sites in the network.

Affiliates

One other way, again from the darker side of things, that people have used this restored content is to turn it into an affiliate site for that category.

For example, if someone bought an expired domain for a bank, they would restore the content and then place CTAs all over the site to fill out a mortgage form.

The customer might think they were getting in touch with a bank. However, in reality, their contact info is being auctioned off to a variety of mortgage brokers.

Not to end on a dark note, there is one final amazing way to use the Wayback Machine and it is the one intended by the creators of the site.

This is the archive of everything on the web, and if someone was researching Amazon’s meteoric growth over the last two decades through the progression of their website, this is where they would find an image of Amazon’s earliest homepage and every subsequent version.


Shady use cases aside, the Wayback Machine is one of the best free tools you can have in your digital marketing arsenal. There is simply no other tool that has 18 years of history of nearly every website in the world.



HTML Sitemaps for SEO – 7 reasons you need one

A sitemap guides your website visitors to where they want to go. It’s where they turn if they haven’t found what they are looking for in those dropdown menus.

Beyond helping your visitors navigate your website, which should be the primary focus of any marketing effort, there are many other reasons to use a sitemap.

First, it’s important to understand that there are two types of sitemaps:

  • XML sitemaps
  • HTML sitemaps

What Are XML Sitemaps?

XML sitemaps help search engines and spiders discover the pages on your website.

These sitemaps give search engines a website’s URLs and offer a complete map of all pages on a site. This helps search engines prioritize the pages that they will crawl.

There is information within the sitemap that shows page change frequency on one URL versus others on that website, but it is unlikely that this has any effect on rankings.

An XML sitemap is very useful for large websites that might otherwise take a spider a long time to crawl.

Every site has a specific amount of crawl budget allocated to it, so no search engine will simply crawl every URL the first time it encounters it.

An XML sitemap is a good way for a search engine to build its queue of the pages it wants to crawl.
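
As a concrete illustration, here is a minimal sketch that generates a small sitemap.xml with the Python standard library, following the sitemaps.org protocol. The URLs and change frequencies are hypothetical examples.

# Build a tiny XML sitemap: one <url> entry per page, each with a location
# and a change frequency hint.
import xml.etree.ElementTree as ET

pages = [
    {"loc": "https://example.com/", "changefreq": "daily"},
    {"loc": "https://example.com/blog/", "changefreq": "weekly"},
    {"loc": "https://example.com/contact/", "changefreq": "yearly"},
]

urlset = ET.Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
for page in pages:
    url = ET.SubElement(urlset, "url")
    ET.SubElement(url, "loc").text = page["loc"]
    ET.SubElement(url, "changefreq").text = page["changefreq"]

ET.ElementTree(urlset).write("sitemap.xml", encoding="utf-8", xml_declaration=True)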

What Are HTML Sitemaps?

HTML sitemaps ostensibly serve website visitors. The sitemaps include every page on the website – from the main pages to lower-level pages.

An HTML sitemap is just a clickable list of pages on a website. In its rawest form, it can be an unordered list of every page on a site – but don’t do that.

This is a great opportunity to create some order out of chaos, so it’s worth making the effort.
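
Here is a minimal sketch of what that ordered approach can look like: pages are grouped into sections and rendered as nested lists rather than one flat dump of URLs. The page data is a hypothetical example.

# Render a grouped HTML sitemap instead of a single flat list of pages.
from collections import defaultdict

pages = [
    ("Products", "Blue Widgets", "/products/blue-widgets/"),
    ("Products", "Red Widgets", "/products/red-widgets/"),
    ("Guides", "Widget Care 101", "/guides/widget-care/"),
    ("Company", "About Us", "/about/"),
]

sections = defaultdict(list)
for section, title, url in pages:
    sections[section].append((title, url))

html = ["<ul class=\"sitemap\">"]
for section, links in sorted(sections.items()):
    html.append(f"  <li>{section}\n    <ul>")
    for title, url in links:
        # The link text doubles as descriptive anchor text for the target page.
        html.append(f"      <li><a href=\"{url}\">{title}</a></li>")
    html.append("    </ul>\n  </li>")
html.append("</ul>")

print("\n".join(html))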

Why You Should Leverage HTML Sitemaps

While you may already use an XML sitemap – and some insist that an HTML sitemap is no longer necessary – here are seven reasons to add (or keep) an HTML sitemap.

1. Organize Large Websites

Your website will grow in size.

You may add an ecommerce store with several departments or you may expand your product portfolio. Or, more likely, the site just grows as new people are added to a company.

However, this growth can leave visitors confused about where to go or what you have to offer.

The HTML sitemap works in a similar way to a department store or shopping mall map.

The sitemap is also a great way for whoever maintains it to take stock of every page and make sure it has its rightful home somewhere in the site.

This is the directory for users that can’t find the pages they are looking for elsewhere on the site and, as a last resort, this should help them get there.

2. Serve as a Project Manager & Architect

Think of the HTML sitemap as an architectural blueprint for your website.

The sitemap becomes a project management tool. It oversees the structure and connections between pages and subpages.

It’s also a forcing function to make sure that you have a clean hierarchy and taxonomy for the site.

A good sitemap is like a well-organized daily schedule.

As any busy person knows, there’s a big difference between an agenda where every meeting is popped on at random and one that is themed and organized around time blocks.

In either case, an agenda is still an agenda but an organized one is far more useful for everyone.

3. Highlight the Website’s Purpose

As a content-based document, the HTML sitemap serves as a way to further define your website’s specific value.

Enhance this benefit by using SEO to identify the most unique and relevant keywords to include on the sitemap.

Anchor text is a great way of creating keyword relevancy for a page, and for pages without many cross-links, a sitemap is an easy place to use choice anchor text.

To understand the power of anchor text alone, look at the search results for the query “click here.”


4. Speed the Work of Search Engine Crawlers

You want to help those search engines out in any way you can and take control where you can. That assistance includes helping them find your content and moving it up in the crawl queue.

While an XML sitemap is just a laundry list of links, HTML links are actually the way search crawlers prefer to discover the web.

The HTML sitemap helps call attention to that content by putting the spotlight on your website’s most important pages. You can also submit the text version of your sitemap to Google.

5. Increase Search Engine Visibility

With some websites, Google and other search engines may not go through the work of indexing every webpage.

For example, if you have a link on one of your webpages, then search bots may choose to follow that link.

The bots want to verify that the link makes sense. Yet, in doing so, the bots may never return to continue indexing the remaining pages.

The HTML sitemap can direct these bots to get the entire picture of your site and consider all the pages. In turn, this can facilitate the bots’ job and they may stay longer to follow the page navigation laid out for them.

Not only do a taxonomy and hierarchy help users find their way, but they are incredibly important for search crawlers, too. The sitemap can help the crawlers understand the website’s taxonomy.

There is no limit to how big a sitemap can be and LinkedIn even has a sitemap which has links to all of their millions of user pages.


6. Enable Page Links in a Natural Way to Drive Visitors

Not every page will connect through a link located in a header or footer.

The HTML sitemap can step in and find these ideal connections that address how visitors may look for things.

In this way, the HTML sitemap can reflect a visitor’s journey and guide them from research to purchase. In doing so, this benefit of HTML sitemaps can raise the organic search visibility of these linked pages.

In this instance, the sitemap is the fallback that ensures that there is never a page on a site that is orphaned.

I have seen huge gains in the traffic of sites that had issues with deeper pages not receiving many internal links.


7. Identify the Areas Where Site Navigation Could Improve

Once your website grows and you develop more pages, there may be duplicate data, which can be problematic for a search engine.

But, after mapping everything out, you’ll be able to use the sitemap to find the duplication and remove it.

As an aside, this only works if there is an owner of the sitemap that is looking at the sitemap on a semi-regular basis.

Also, when you apply analytics or heat map tools, they may reveal that more visitors are using the HTML sitemap than the navigation.

This is a clear signal that the current navigation is missing the mark, and you need to reassess why this is happening.

It’s important to determine how you can change the site architecture to make it easier for visitors to find what they need.

For all these benefits, you’ll want to maintain an HTML sitemap. These benefits save resources (time and money). They also deliver an effective way to guide your website visitors to what they need and help close those sales.

Getting Started

If you don’t have an HTML sitemap but do use a platform like WordPress, I recommend one of the many sitemap plug-ins. The plug-ins automate much of the sitemap development and management process.

For larger sites, it might take running a web crawl with a dedicated crawling tool.

The output of this web crawl should then serve as the basis for organizing all of a site’s pages around themes.
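
If you want to script a rough first pass, here is a minimal sketch that walks a site’s internal links with the Python standard library and groups the discovered URLs by their first path segment as a starting point for themes. The start URL is a hypothetical example, and a dedicated crawling tool will do this far more robustly.

# A small breadth-first crawl of internal links, then a rough grouping of the
# discovered URLs into "themes" by first path segment.
from collections import defaultdict, deque
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
import urllib.request

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

def crawl(start_url, max_pages=50):
    domain = urlparse(start_url).netloc
    seen, queue = {start_url}, deque([start_url])
    while queue:
        url = queue.popleft()
        try:
            with urllib.request.urlopen(url) as resp:
                html = resp.read().decode("utf-8", errors="ignore")
        except Exception:
            continue  # skip pages that error out
        parser = LinkCollector()
        parser.feed(html)
        for href in parser.links:
            absolute = urljoin(url, href).split("#")[0]
            if urlparse(absolute).netloc == domain and absolute not in seen:
                if len(seen) >= max_pages:
                    return seen
                seen.add(absolute)
                queue.append(absolute)
    return seen

themes = defaultdict(list)
for url in crawl("https://example.com/"):
    segment = urlparse(url).path.strip("/").split("/")[0] or "home"
    themes[segment].append(url)

for theme, urls in sorted(themes.items()):
    print(f"{theme}: {len(urls)} pages")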

After developing the HTML sitemap, don’t forget to put a link on your website that is easy to find.

You can either put the link at the top, as part of a sidebar or in a footer menu that continues to be accessible as visitors move from page to page.

However you look at it, an HTML sitemap is an easy way to get huge benefits without a lot of effort.


The power of backlinks for SEO

In the early days of the Internet and search, Google differentiated themselves from the other search engines by focusing on quality signals to determine relevancy for a query. Amazingly, the other engines – and yes there were lots of other engines – completely ignored quality and looked at keyword matches to pages in the index.

The primary signal that Google used to determine quality was the value of the links that pointed to a specific page or website. The value passed by those inbound links is calculated by the value of their own links, and on and on it goes. From Google’s perspective, the Internet is a true web of pages linking to each other and connecting all pages together.

Linking understood

Google modeled its ranking algorithm on a traditional academic authority model. An academic paper with a new idea is considered to be more authoritative if it has a large quantity of citations discussing it. At the same time, the quantity of those citations has to be qualified by the quality of the citations, so a paper cited by a Nobel laureate would be more valuable than one cited by a high school senior.

Moving this model over to the web, Google used the same calculation to value websites. A website that has a link pointing to it from Stanford University would in theory be more valuable than one that only has a link from Kaplan University. It’s not that Google recognizes that Stanford is a highly reputable university with a higher caliber of education than Kaplan because of the Stanford “brand”; rather, Stanford has more authority because it has a higher quality of other websites that link to it than Kaplan.

Furthermore, quality is not created by a website alone; the linking page also must have its own authority, which is calculated by the internal link flow as well as any external inbound links. In this respect, a link from the Kaplan homepage to a website is likely more valuable, from a link standpoint, than one from a private student’s blog on the Stanford domain.

Viewed holistically in this manner, the idea of a .edu or .gov having more link authority than a .com is completely false, as every domain has to stand on its own within the web based on its own backlinks. It is likely that a .edu or .gov will have more link value to share, but there is no guarantee. Just to underscore this point, Google knows that whitehouse.gov is the most valuable government website not because of whose website it is, but because it has the highest value of incoming links.

Manufactured linking

While Google claims to view hundreds of factors in determining rankings, links have always played a very prominent part in the calculation. The non-link factors are a lot more mysterious, but on its face, this link calculation algorithm seems very simple to manipulate. High value links will pass an extraordinary amount of value and help the linked page rise in search rankings.

As a result, almost from the day Google launched its index, huge economies sprang up to help marketers manipulate their rankings via artificial links. On the cleaner end of things, there were reporters or websites willing to accept compensation in exchange for a link placement, while on the dirtier end there were botnets designed to hack websites just to place links.

In between these two options, there were brokers that assisted websites in finding the perfect place to purchase a link on a permanent or even temporary basis. Up until 2012, all of this link manipulation was remarkably effective, and websites that spent vast sums on link building saw their sites dominate valuable positions in Google. But this is not the way Google had been conceived to work. Websites were not supposed to be able to simply spend their way to the top of rankings when Google really wanted its index to focus on user experience and relevancy.

Penguin

In 2012, Google released their Penguin algorithm update whose sole purpose was to identify manipulative linking schemes and demote the recipients of the links. When possible, Google nuked entire link networks bringing down sites that linked as well as the sites receiving the links.

For the first few months and even years after Google unveiled this algorithm update, websites were terrified of having their previously undiscovered link building efforts revealed, leading to a penalty. Sites frantically submitted disavow files to Google in which they disclosed the shady links they may have had a role in acquiring. Out of fear, websites even proactively disavowed links that they had nothing to do with. This algorithm update gave rise to the concept of negative SEO, where a malicious person could point dirty links at a website and then watch Google penalize the receiving website for having dirty inbound links. (Note: Google claims this is not possible, but there are many case studies of negative SEO working.)

It has now been almost 8 years since this algorithm update, and link buying activity is once again picking up. Websites have become more confident in their abilities to evade Google and use these links to accelerate their SEO efforts. This time it is called guest posts or sponsored posts rather than outright paid links.

Google is smarter than you think

In my opinion, most of the time this is a completely wasted effort, not because they will be caught by Google, but because the links just don’t work. What many might forget about Google is that it is a company driven by machine learning and artificial intelligence. Outside of search, Google’s Waymo has driven more autonomous miles than anyone else working on self-driving vehicles. To date, in the 5 million miles driven by Google, we have not heard of any serious injury or fatality caused by Waymo, which means that Google has AI that is good enough to make life-and-death decisions. This challenge is light years more complex than ranking search results.

In many instances a human reviewer can quickly identify a pattern of artificial links, which means that Google’s AI can likely do the same. A website might not get penalized when its artificial links are discovered, but the links themselves will just be discounted from the ranking algorithm. The net result is that any resources expended in acquiring the links were completely for naught.

Link solution

If links are an important component of SEO and they can’t be manipulated, this might seem like a dead end for a website looking to increase rankings. Fortunately, there is a solution and it is one that Google recommends: Build a brand. Brands don’t build links, they get links.

Brands in search

Google has been accused of favoring brands in search, and that should be true simply because users favor brands! Just as in a supermarket we gravitate to the branded products, the exact same thing happens on a search results page. As in the earlier explanation about the value of university links, Google doesn’t give a brand extra credit for being a brand; rather, it recognizes brands because they have brand characteristics.

Building a brand on the web is not an easy feat, but the first step is to think like a brand. A brand like Coca-Cola doesn’t seek out websites to link to it, because it knows that if it creates well-designed products and refreshing beverages, and launches campaigns, the media will talk about it. A brand focuses on its core product offering first and then seeks to get attention. A non-brand seeks to get attention so it can one day have a great product.

Focus on the right goal

Focus on the product and let marketing tell that product story. The byproduct of that story will establish the brand and lead to links which will reinforce the brand. This does not have to be done without help. Brands use PR agencies to tell their stories, and any company aspiring to be a brand can do the same.

There are amazing PR agencies that are familiar with SEO who can ensure that there are links within promotional campaigns, but the PR is the focus not the link.

Links are and always will be a part of the ranking algorithm, but think of the algorithm like the smart human Google intends it to one day be. If a human could easily detect an unnatural link the algorithm likely could too.

Instead of using precious resources to build those unnatural links, deploy that effort to build a brand which attracts the links that lead to rankings. A clever infographic designed to inform, a media campaign, a billboard or a unique approach to data can all be used to generate the buzz that leads to links and eventually coveted rankings. Don’t focus on links as the means to build a brand on search; instead, view links as the byproduct of brand building that Google has always intended them to be. Links are just a piece of the algorithm designed to inform Google about authority that should already exist.


B2B SEO vs B2C SEO

The tactics and strategies for SEO are very similar whether the target customer is a consumer, business, non-profit or government. In all cases, the goal is to maximize organic visibility for whoever is looking for this particular website or content. The means to achieve this goal are always the same.

The real difference in strategies between these various end users is the expectations of how SEO will perform and what kind of content should be optimized. Generally, SEO is much higher in the buyer’s funnel than other sources of traffic; however, for consumers it is far more likely that a conversion will happen in the same session as the organic click.

This is the breakdown of the types of content that should be created and the expectations for each type of user. All of these buckets will be fairly broad, as a consumer could be a teenager looking for an idea for a high school paper or a high net worth individual seeking a financial advisor. The same principles apply whether the B2B buyer is a sole proprietorship buying for the business or a Fortune 10 company.

With that in mind these are the buckets:

  • Consumer – Usually buying for themselves, so there will be fewer decision-makers and therefore the buying process is quicker. The consumer wants the information they were seeking, and if there is purchase intent, they want to be reassured that the purchase is worthwhile. Content for a consumer should be conversion oriented.
  • Business – At any medium or large company there will be lots of decision-makers, so the goal of the content should be to get the search user to become aware of the brand. Content should be written to get the user to search more or share information to be added to a database. The SEO efforts may have to aim a bit lower, to have users join a webinar or follow the brand on social rather than buy products.
  • Non-profit – A non-profit functions like a business in its buying behavior, except it may be more budget conscious. Keep in mind that a non-profit could range from a small local PTA to a global organization like the Red Cross. Don’t make any assumptions.
  • Governments – Governments are like businesses with the clock rewound a century. The content for government buyers has to establish the business as an entity worth continuing to explore and should focus on building internal advocates. Governments can range from local cities all the way up to national federal agencies. Understand the buying process and target those users.

Having the right expectations before embarking on any optimization effort will better help all stakeholders manage their time and resources as they invest in SEO. Too often, B2B SEO campaigns fail because there was an expectation of instant conversions. Knowing that SEO for B2B, government or non-profit is there only to assist other channels in being more successful will go a long way toward having SEO efforts that everyone is on board with supporting.


Competition and SEO

Competition and competitors are always a touchy subject in any business strategy, but things could get really heated in the digital marketing realm. Unlike in the offline world where attacking a competitor costs a pretty penny and is very visible, online there are many areas of opportunity to unseat a competitor without breaking a sweat.

In the bucket of fair competition, websites can create and promote head-to-head comparisons, bid on a competitor’s brand name or call out their competitors in their paid marketing. But then there are the dirty tactics too, like negative SEO attacks, snitching on competitors to Google, clicking their ads to drive up their marketing costs or scraping their content.

All of this adds up to really nasty competition to a site which in many cases is just a faceless entity. This aside, I think many sites approach competition the wrong way with regards to SEO. While websites might think of their competitors as those that have similar product offerings to them, for SEO it is really any site that is targeting the same search terms for whatever purpose.

Who is the competition

In this sense, Wikipedia is just about everyone’s competition even if Wikipedia isn’t selling any products. Similarly, any other site that is providing information that might satisfy a user’s query intent should be added to the competitive set. All sites that are in this competitive set should be monitored, or at least observed on occasion, to see how they are growing and any specific tactics they are using to drive growth.

Learning from the competition

Rather than focus on destroying the competition, I think it is best to learn from competitors. If there is something that is working for them, learn how to do it better. If they are generating traffic from a specific query set but not effectively answering the query’s intent, this is an opportunity to create better content.

Content

In the same vein, if a competitor has created content within a specific topic but left open large gaps in their coverage of the topic, this is an opportunity to do it better. A competitor might have a broad approach that can be better capitalized on with a far more narrow strategy. For example, a pest control product site might be able to beat a competitor by having detailed how-tos on using a product rather than general awareness content.

Links

Observing how and why a competitor receives links is another great strategy for growth. Provided that a competitor is accruing links in an above-board fashion, trying to understand the intent behind why someone might link to them can lead to even better ideas on where to find new links for your own site. If a competitor is using illegitimate link tactics, it might be reassuring to know that they are probably similarly weak in other areas of the business.

Tools

To dig into and learn from the competition, I use three primary tools.

  1. Google search – search their site on Google using site queries (site:domain.com) to see how they come up in Google. Observe how many pages they have, title tags, meta descriptions, rogue pages they probably did not want indexed, images and content strategy.
  2. Backlink tool – Any tool will suffice, but I like Ahrefs. Use this tool to dig into their backlinks, top keywords driving traffic, recent trends on performance and any similar sites to them you may not have been aware of before.
  3. A crawling tool – Any cloud or desktop tool will do, but this is where you really learn about how their site is structured and any deeper learnings you could not find on Google yourself. Note that many sites do not appreciate being crawled for this purpose, so do this at your own risk.

For SEO, competition is not a bad thing at all and it is innate in how search works. Even if you are the only one that has ever offered your service you will inevitably face competition since there will never just be one result on Google. Use competition to shorten your learning curve and develop better strategies. Viewed in this light competition should be welcomed not avoided.


SEO project management and workflow

Managing SEO as a product means that SEO asks will have to fit within a typical product prioritization process. In many organizations, product requests must be accompanied by detailed information that would allow a product manager to stack rank any request against any other priority.

The stack ranking will typically have the ask as well as details which would help them to calculate the resources and time they might need to complete the request. While every organization might have its own format, here’s a format I have found to be incredibly useful; a small sketch of the scoring math follows the column list below.

Spreadsheet explained

  1. The first column has a quick summary of the ask
  2. The second column goes into a bit more detail on why it needs to be fixed. This should be explanatory enough that someone could understand it just by reading the spreadsheet.
  3. This column should explain the fix as well as any alternative options. This will give the product manager all the information they need on how to go about assigning the request to any stakeholder.
  4. This column scores the impact of the fix on a scale of 1-10, with 10 being the most impactful.
  5. Next I score the effort related to making the fix on a scale of 1-10, with 10 being the lowest effort. A ten might be a quick text fix while a one could be a full rebuild.
  6. As with anything SEO related, there is a certain amount of guesswork that goes into planning, so in this column I score the confidence in the impact and effort on a scale of 1-10.
  7. After these three scoring columns, we can get to the stack ranking by adding up the scores. The highest score means the most impactful, lowest effort and highest confidence of success.
  8. The next few columns will be for tracking and coordination. Column eight will have additional notes not captured previously.
  9. This column will record a bug ticket, so anyone looking to follow along on progress can know where to look.
  10. Column ten will have the date it shipped to engineering which is very helpful for bunching work into quarters.
  11. This column shows the assigned person so anyone checking on progress knows who to talk to.
  12. The rest of the columns are additional notes that are very helpful for future tracking
    • Completed dates
    • Notes
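
Here is a minimal sketch of the stack-ranking arithmetic described in the columns above; the requests and scores are hypothetical examples.

# Each ask gets impact, effort and confidence scores (1-10, where 10 means
# most impact, lowest effort, highest confidence); the sum sets the priority.
requests = [
    {"ask": "Fix duplicate title tags", "impact": 6, "effort": 9, "confidence": 8},
    {"ask": "Rebuild faceted navigation", "impact": 9, "effort": 2, "confidence": 5},
    {"ask": "Add HTML sitemap link to footer", "impact": 4, "effort": 10, "confidence": 7},
]

for request in requests:
    request["priority"] = request["impact"] + request["effort"] + request["confidence"]

# Highest total score = high impact, low effort, high confidence: do it first.
for request in sorted(requests, key=lambda r: r["priority"], reverse=True):
    print(f'{request["priority"]:>2}  {request["ask"]}')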

Too often SEO requests are ignored or not assigned because there is not enough clarity on what is being requested. Using a detailed spreadsheet like this, or whatever format is more comfortable for the company culture, will ensure that SEO asks follow the same model as anything else that will come in front of engineering or product teams.

Having a detailed spreadsheet such as this one is also very helpful when there’s a sudden need to share progress with an executive, as there’s a clear list of what has been accomplished or where things are in the pipeline. Additionally, this document is great to hand over to other employees when the SEO person moves on from a company.


Product led SEO is the secret recipe of the world’s biggest websites

Within Growth circles there is this idea of “product led growth,” which upends the whole premise of marketing a product to promote adoption and instead focuses on getting a great product into the hands of users, who become the marketing agents.

In this paradigm, users of the product will love it so much that they will share it with others in the company and/or their social circles. There may be innate triggers within the product that encourage sharing and thereby force the hand of the original user to share with their networks.

Notable recent examples of companies that were incredibly successful at this are Slack, Dropbox and Zoom, whose products become more useful for individual users as they are adopted within a group. Each of these companies deprioritized traditional marketing and instead focused on building an amazing product that users would just have to share with their networks. This idea has worked amazingly well, as each of these companies grew exponentially without spending any substantial funds on early marketing.

When product led growth is successful, the company will have acquired a large segment of users to learn from and then build marketing strategies around. The company can learn what caused the product to be adopted naturally by other teams within a company and then use marketing to make this adoption process more deliberate.

SEO led product

I am a proponent of using the same approach for SEO efforts. Too often, SEO efforts begin too simply with just a group of keywords. These keywords are developed by the marketing team or founders based on their own knowledge of the product. These keywords become the seeds of keyword research: they are input into a keyword research tool, and the output is a list of related keywords.

This new longer keyword research list becomes the seed for content ideas that will be written and posted on the website. The keyword list becomes a sort of checklist and content road map which doesn’t change much based on actual performance or real time metrics.

Keyword research and SEO efforts don’t end with just a content plan; they are also incorporated into the product. The very names of the products are determined by keyword research on what a user would be most likely to type into Google as a query in search of a product. On many occasions, using keyword research for this purpose can be very inadequate, especially when demand for the product is low or does not yet exist, but still it is forced to suffice.

The downside to SEO in this process is that there’s no room for a user feedback loop, and much of the content creation is all too manual. Content intended to match keywords is written in somewhat long form, and the full library will only scale as fast as the content producers can write. The greatest gap in this approach is that the SEO strategy is laser-focused on specific keywords and an expectation of generating a high position on just those keywords. Ranking on targeted keywords, aside from the vanity aspect, is aspirational and may never even be achieved.

Product led SEO

Flipping this script to product led SEO, instead of using SEO to market the product, the product becomes the SEO driver. Many of the most successful websites on the Internet have achieved their organic dominance through this approach. Rather than relying on keywords and content as the bedrock of their SEO efforts, they used a scaled approach which relied more on product and engineering than marketing. Amazon, TripAdvisor, Zillow and even Wikipedia are great examples of product led SEO, but there are thousands of others.

For each of these four companies, before they developed and propagated their product on the Internet there wasn’t even keyword research for them to rely on.

  • Amazon focused on building a great architecture to support a well-indexed site even before the idea of SEO existed. Their site has grown into the SEO magnet it is today by scaling that initial iteration of a well-developed product that fit with SEO principles. Had they relied on keyword research to launch the site in the early days of the Internet, they may have over-prioritized the adult keywords that were so popular at that time to the detriment of the book ecommerce that no one was yet searching for.

  • TripAdvisor didn’t start by creating a “blog” of reviews of the most popular hotels with the most search volume. Instead they built an architecture that could scale and host reviews for every property in the entire world. It may have taken years before they outranked the individual travel blogs and sites that ranked highly on search for the most popular properties, but their reward is that today they rank in the top five of results for every hotel in the world.

  • Zillow didn’t focus all of their SEO efforts on trying to rank for the popular keywords in their space which may have been words like “home value” or “online realtor.” Instead they poured their efforts into building a colossal site which has a page for every single address in the United States. At the time, that may have seemed like a foolish approach as a) no one looked for specific addresses b) they would be competing with Google maps or even Mapquest which was still a thing. Looking at their footprint now where they are visible for every address in the country, they could not have made a better bet.

  • Wikipedia didn’t set out to be the online encyclopedia for what most people were looking for; rather, they set out to be an encyclopedia of everything. Early in their process, having an entry for everything might have seemed like an absolute impossibility. They disregarded the naysayers and built the product that could continuously scale into a repository of everything in every language. There are gaps in their knowledge base and there probably always will be, but they have been undeniably successful at achieving the goal.

Scaling product led SEO

Just like product led growth, the feedback loop drives future iteration of the product. Knowing what resonated on search with both search engines and users will dictate the future roadmap of improvements and adjacent products.

Amazon does not need to make the same SEO leap of faith as they enter new categories because they can be very confident that they will get organic traffic on anything they launch. TripAdvisor’s success in hotel reviews gave them a playbook on how to launch attractions and things-to-do products. Zillow’s dominance in address search opened up the pathway to organic visibility in the mortgage vertical, which had always been one of the most competitive categories on the web.

The twenty year winner

With the clarity of hindsight, product led SEO will always be the clear winner, but it is undoubtedly challenging to envision the success you might see from this approach when first starting an SEO effort. It is my opinion that there can be a product led SEO angle in every vertical and niche.

Aside from the likely monumental work by engineering teams to build a product that will eventually become an SEO juggernaut, you will need a great degree of patience and even faith. At the outset it will not be clear that there will be demand for the eventual product, but remember that there was no data that supported the eventual SEO goals of Amazon, TripAdvisor and Zillow either. The best selling point for product led SEO is that after companies are successful with their product-first efforts, they will eventually also dominate organic results for the most coveted category keywords too.
