Within technology project management there are many popular methods for organizing workflow. The two most common are Waterfall and Agile. There are advantages and disadvantages to each one, but for SEO Agile is the way to go.
In this year’s Super Bowl, like in all the past years, Google ran a commercial. In prior years, Google highlighted search, but this year they decided to put the focus on the Google Assistant. Based on immediate Twitter responses as well as recaps of the ads that ran this year, Google definitely hit the mark by provoking an emotional response.
Very often in conversations about digital marketing, I hear SEO (free search traffic) referred to as the opposite side of the spectrum of SEM (paid search marketing). In this view, SEO and SEM teams are competing with each other for resources and budget. In my opinion, this is not the right way to view these teams and is in fact counterproductive.
In the world of investing, whether at an exclusive hedge fund or in a basic retirement plan, the effect of compounding growth is a bigger driver of returns than lucky timing or discovering massively undervalued opportunities. Explained simply, compounding is the idea of earning returns – even small ones – which are then added to the principal. Compounding is generating earnings from earnings.
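As a rough illustration (all figures here are hypothetical), even a small recurring growth rate, compounded period after period, ends up dominating the total return:

```python
# Hypothetical illustration: a modest 2% monthly growth rate,
# compounded, roughly doubles the starting value in three years.
def compound(principal: float, rate: float, periods: int) -> float:
    """Value of `principal` after `periods` of growth at `rate` per period."""
    return principal * (1 + rate) ** periods

start = 10_000  # e.g. monthly visits or dollars today (invented number)
print(round(compound(start, 0.02, 36)))  # ~20_399, about double
```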
Every time there is a rumor of a Google algorithm update, a general panic ripples through the massive community of people who are heavily reliant on free traffic from Google’s search users. There is a collective holding of breath while the numbers are analyzed and then a sigh of relief (hopefully) of surviving the algo update unscathed.
After the update is released and especially if it’s confirmed by Google, there is a slew of articles and analyses produced attempting to dissect what it is that Google changed and how to win in the new paradigm.
In my opinion, all of this angst is entirely misplaced and is rooted in a fundamental misunderstanding of what exactly happens in a Google algorithm update. The Google algorithm is made out to be some sort of mythical secret recipe cooked up in a lab, designed to simultaneously rob and reward sites at the whims of a magical all-knowing wizard. In this scenario, the goal of every SEO and webmaster is to dupe this wizard and come out on the winning side of every update.
Nothing could be further from the truth. Google’s algorithm isn’t even a single algorithm; rather, it’s a confluence of multiple algorithms. Google’s guide on how search works ALWAYS refers to the algorithm in the plural. If you were to parse the data on that page and read tweets about algorithms written by Googlers, it appears that there are three primary algorithms, each of which has a different purpose.
There is also a fourth primary algorithm which is tasked with understanding a user’s query and then modifying it into something else before the search engine queries the database. This is the algorithm affected by Google’s announcement of BERT.
With this understanding of Google’s algorithms, it makes a lot more sense how Google could claim to update their algorithms multiple times per day.
These algorithms are extensive and complex software programs which constantly need to be updated based on real scenarios. As anomalies are found by search engineers, they are patched just as a bug in any software program would be. In any other company this might just be a bug fix, but in search it translates to an algorithm update.
In any software company where the software is the product, there are product updates that happen multiple times per year. There are always changes being made, some visible and others not so much. As an example, Facebook is constantly tweaking all aspects of their product; they didn’t just launch their news feed many years ago and leave it. Even our phone operating systems, whether Android or iOS, are updated in a major way at least once per year.
Google, like any other software company, releases updates that take big leaps forward on their product; however, in Google’s case they are called “major algorithm updates” instead of just product updates. This phrasing alone is enough to induce panic attacks.
Algorithms don’t hurt
Now with this knowledge of what exactly an algorithm update is, it is easier to understand why there really is never a reason to panic. When Google’s product managers determine that there are improvements to make in how the search product functions, they are usually tweaks at the margins. The updates are designed to address flaws in how users experience search. Much like a phone operating system leaps forward in a new update, Google’s major updates make significant improvements in user experiences.
If a site experiences a drop in search traffic after a major algorithm update, it is rarely because the entire site was targeted. Typically, while one collection of URLs may be demoted in search rankings, others more than likely improved.
Understanding what those leaps forward are requires taking a deep dive into Google Search Console to drill into which URLs saw drops in traffic and which saw gains. While a site can certainly see a steep drop-off after an update, it’s simply because the site had more losers than winners – most definitely not because the algorithm punished it.
In many cases, sites might not have even lost traffic – they only lost impressions that were already not converting into clicks. Looking at the most recent update, where Google removed the organic listing of sites that have a featured snippet ranking, I have seen steep drops in impressions while the clicks are virtually unchanged.
Declaring a site to be a winner or loser after an update neglects the granular data that might have led to the significant changes in traffic. It is for this reason that websites should not fear the algorithm if their primary focus is on providing an amazing and high-quality experience for users. The only websites that have something to fear are those that should not have had high search visibility because of a poor user experience.
Past algorithm updates
In recent times, it is rare for a site that provides a quality experience for users – defined as satisfying a user’s query intent – to have all of its URLs demoted in an update. If that truly happens, the site was likely benefitting from a “bug” in how Google worked and was already living on borrowed time. Websites that exploit loopholes in the way that Google ranks the web should always be aware that Google will eventually close the loophole for the good of the entire Internet.
There were certainly times in the more distant past where entire sites were targeted by algorithm updates, but that is no longer the case. Panda, which was designed to root out low-quality content, Penguin, which demoted unnatural links, and Medic, which demoted incorrect medical information, all had specific targets, but other sites were left relatively untouched. If a site was on the losing side prior to such an update because competitors were exploiting loopholes, it likely saw significant gains as those competitors dropped out of search.
Updates are a fact of search life
Google will and should always continuously update their algorithms so that their product evolves into something that retains their users. If they left the algorithm alone, they would risk being overrun by spammers who take advantage of loopholes, and Google would go the way of AOL, Excite, Yahoo and every other search engine that has faded into irrelevance.
Instead of chasing the algorithm, everyone who relies on search should maintain their focus on the user. The user is the ultimate customer of search, and serving the user will thereby immunize a site from algorithm updates designed to protect the search experience. There is no algorithm wizard. The algorithm(s) have only one purpose, and that is to help a user find exactly what they seek.
Years ago, Google used to have a publicly visible score called PageRank that ranked the authority of a website based on backlinks. All websites began with a score of zero, and as they acquired valuable backlinks the score moved up to a maximum of ten. Most well-trafficked sites hovered around a five or six, with exceptional sites going as high as seven or eight. Scores of nine or ten were reserved for the most authoritative sites on the Internet like Whitehouse.gov, Adobe and Google itself.
(Side note: In 2009, Google Japan participated in a scheme that could have been construed as an attempt to build backlinks. As a result, they were penalized with a public PageRank penalty that dropped their score from nine to five. In reality, this had no impact on their actual rankings, but it was perceived as a penalty.)
Aside from being used as an indicator of how valuable a backlink from a particular site might be, the score was utterly useless. Apparently, Google realized that showing visible PageRank was facilitating a link acquisition economy that they did not want to exist, so they deprecated the visible aspect of PageRank. (Note: Google’s actual ranking algorithm still uses “PageRank”; they just don’t share a score.)
When Google stopped sharing page rank, other tools that calculated web authority stepped into the void. Moz’s Domain Authority immediately became a popular way of valuing link acquisition efforts, although there were alternatives from SEMRush and Majestic.
Today there are many options for valuing links, and any tool that crawls the web will attempt to quantify the value of a website’s inbound links in a single metric. The tools calculate their scores using a methodology similar to how Google’s patents claim Google computes authority: each tool values all the links into a site, then computes the value of each of those linking sites, and so on, to arrive at a total score.
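As a toy sketch of the iterative calculation these tools approximate, here is the public PageRank formulation run over a tiny invented link graph. The real tools and Google itself use far more data and refinement; this only illustrates the idea that value flows through links:

```python
# Toy PageRank-style iteration over a tiny, invented link graph.
# Illustrative only: real authority metrics are far more sophisticated.
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1 / n for p in pages}
    for _ in range(iterations):
        new = {}
        for p in pages:
            # Sum the rank passed in by every page that links to p;
            # each linking page splits its rank among its outbound links.
            incoming = sum(rank[q] / len(links[q]) for q in pages if p in links[q])
            new[p] = (1 - damping) / n + damping * incoming
        rank = new
    return rank

graph = {"a": ["b"], "b": ["c"], "c": ["a", "b"]}
scores = pagerank(graph)
# "b" ends up with the most authority: it receives links from both "a" and "c".
```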
As the explanation on each of these sites will elaborate, none of these calculations are used by Google in their rankings and there is no proven impact from the scores on actual rankings. Unlike these vanity metrics, Google computes a score (or multiple scores) in real time with information only available to Google. Google knows which sites are recently hacked and should be excluded, which sites have a pattern of link manipulation and most of all their AI is processing all of the internet at scale.
A site with an authority score (from any tool) of zero can still rank on queries if it is determined to be the most relevant, and a site with a score of one-hundred will not rank on a query where it is not relevant.
In actuality, these scores are nice to know, but are vanity metrics just like rankings. If you are in the link selling business, a higher score will help generate more sales, but otherwise the score will not do much.
Domain Authority is not a goal
Too often, I have heard of SEO teams where one of the goals for the year was to increase their score in one of these tools. This is a futile effort for many reasons, but it also takes their focus away from what is truly important. Even acquiring many valuable links may not influence the score, so there is no way to guarantee that this goal will be achieved.
However, even if the team is able to increase the score by a few notches, it will not increase revenue or even organic traffic, which is how the SEO team should really be measured. (Note: there have been many correlation studies which indicate a potential relationship between higher scores and more rankings, but these are not causation studies.)
What should be the goal
Instead, the SEO team should focus on efforts that increase organic conversions, such as creating great content, building a technically sound site and satisfying user intent. Content that is high quality, relevant and helpful will also attract links, which might then lift a score. Treating content as the end in itself, rather than as a means to a higher authority score, is what brings a team closer to any revenue goal.
If a team is doing this, their SEO efforts will certainly pay off in organic conversions and traffic even if their domain scores never hit some arbitrary magic number. SEO is an acquisition channel that must impact the bottom line and not just vanity metrics.
In 2001, a nonprofit named the Internet Archive launched a new tool called the Wayback Machine at archive.org.
The mission of the Internet Archive was to build a digital library of the Internet’s history, much the same way paper copies of newspapers are saved in perpetuity.
Because webpages are constantly changing, the Wayback Machine crawlers frequently visit and cache pages for the archive.
Their goal was to make this content available for future generations of researchers, historians, and scholars. But this data is just as valuable to marketers and SEO professionals.
Whenever I am working on a project that involves a steep change in traffic, either for my own site or a competitor’s, one of the first places I look is the cached pages from before and after the change in traffic.
Even if you aren’t doing forensic analysis on a site, just having access to a site’s changelog can be a valuable tool.
You can find old content or even recall a promotion that was run in the previous year.
Much like looking at a live website, the cached pages will have all the information available that might explain a shift in traffic.
The entire website, with all HTML included, is contained within the cache, which makes it fairly simple to identify obvious structural or technical changes.
In comparing the differences between a before and after image of my site or a competitor’s, I look for issues with structure, content, and technical setup.
Here are the steps to use the Wayback Machine for troubleshooting.
This does not need to be a homepage. It can be any URL on the site.
Note the color coding of the dates: blue marks a successful capture, green a redirect, and orange or red an error response.
You may have to continue picking dates and then digging through each version until you find something interesting worth looking at further.
For larger sites, you will find that homepages are cached multiple times per day, while other sites will only be cached a few times per year.
Look for obvious changes in structure and content that may have led to a change in search visibility.
No detail is too small to be investigated. Look at things like cross-links, words used on pages, and even for evidence that a site may have been hacked during a particular time period.
You should even look at the specific language in any calls to action, as a change here might impact conversions even if traffic is now higher than at the time of the Wayback Machine’s cache.
The Wayback Machine even retains snapshots of robots.txt files so if there was a change in crawling permissions the evidence is readily available.
This feature has been amazingly useful for me when sites seem to have dropped out of the index mysteriously with no obvious penalty, spam attack, or a presently visible issue with a robots.txt file.
To find the robots file history, just drop the robots.txt URL into the search box.
After that, choose a date and then do a diff analysis against the current robots file. There are a number of free tools online which allow for comparisons between two different sets of text.
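If you prefer not to paste text into an online tool, Python’s standard library can produce the same comparison. A minimal sketch (the two robots.txt contents below are hypothetical examples, not real files):

```python
import difflib

# Hypothetical robots.txt contents: one pulled from a Wayback Machine
# snapshot, one from the live site.
archived = """User-agent: *
Disallow: /private/
""".splitlines()

current = """User-agent: *
Disallow: /
""".splitlines()

# unified_diff highlights exactly which directives changed between dates.
for line in difflib.unified_diff(archived, current,
                                 fromfile="archived", tofile="current",
                                 lineterm=""):
    print(line)
```

In this invented example the diff would surface that `Disallow: /private/` became `Disallow: /`, i.e. the whole site was accidentally blocked from crawling.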
An additional less obvious use case for the Wayback Machine is to identify how competitors may have built backlinks in the past.
Using a tool like Ahrefs I have looked at the “lost” links of a website and then put them into the Wayback Machine to see how they used to link to a target website.
A natural link shouldn’t really get “lost” and this is a great way to see why the links might have disappeared.
Aside from these incredibly useful ways to use the Wayback Machine to troubleshoot SEO issues, there are also some seedier ways that some use this data.
For those that are building private blog networks (PBNs) for backlink purposes, the archived site is a great way to restore the content of a recently purchased expired domain.
The restored site is then filled with links to other sites in the network.
One other way, again from the darker side of things, that people have used this restored content is to turn it into an affiliate site for that category.
For example, if someone bought an expired domain for a bank, they would restore the content and then place CTAs all over the site to fill out a mortgage form.
The customer might think they were getting in touch with a bank. However, in reality, their contact info is being auctioned off to a variety of mortgage brokers.
Not to end on a dark note, there is one final amazing way to use the Wayback Machine and it is the one intended by the creators of the site.
This is the archive of everything on the web, and if someone were researching Amazon’s meteoric growth over the last two decades through the progression of their website, this is where they would find an image of Amazon’s earliest homepage and every subsequent version.
Shady use cases aside, the Wayback Machine is one of the best free tools you can have in your digital marketing arsenal. There is simply no other tool that has 18 years of history of nearly every website in the world.
A sitemap guides your website visitors to where they want to go. It’s where they turn if they haven’t found what they are looking for in those dropdown menus.
Beyond helping your visitors navigate your website, which should be the primary focus of any marketing effort, there are many other reasons to use a sitemap.
First, it’s important to understand that there are two types of sitemaps:
XML sitemaps help search engines and spiders discover the pages on your website.
These sitemaps give search engines a website’s URLs and offer a complete map of all pages on a site. This helps search engines prioritize the pages that they will crawl.
There is information within the sitemap that shows how frequently one URL changes versus others on the website, but it is unlikely that this has any effect on rankings.
An XML sitemap is very useful for large websites that might otherwise take a spider a long time to crawl through.
Every site has a limited crawl budget allocated to it, so no search engine will simply crawl every URL the first time it encounters it.
An XML sitemap is a good way for a search engine to build its queue of the pages it wants to crawl.
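For reference, a minimal XML sitemap entry follows the sitemaps.org protocol and looks like this (the URL and dates are placeholders); `changefreq` is the change-frequency hint mentioned above:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/page</loc>
    <lastmod>2020-01-01</lastmod>
    <changefreq>weekly</changefreq>
    <priority>0.8</priority>
  </url>
</urlset>
```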
HTML sitemaps ostensibly serve website visitors. The sitemaps include every page on the website – from the main pages to lower-level pages.
An HTML sitemap is just a clickable list of pages on a website. In its rawest form, it can be an unordered list of every page on a site – but don’t do that.
This is a great opportunity to create some order out of chaos, so it’s worth making the effort.
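A minimal sketch of what an ordered HTML sitemap page might contain, with links grouped by theme rather than dumped in one flat list (all page names and paths here are hypothetical):

```html
<!-- Hypothetical HTML sitemap: grouped links rather than one flat list -->
<nav aria-label="Sitemap">
  <h2>Products</h2>
  <ul>
    <li><a href="/products/widgets">Widgets</a></li>
    <li><a href="/products/gadgets">Gadgets</a></li>
  </ul>
  <h2>Support</h2>
  <ul>
    <li><a href="/support/faq">FAQ</a></li>
    <li><a href="/support/contact">Contact Us</a></li>
  </ul>
</nav>
```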
While you may already use an XML sitemap – and some insist that an HTML sitemap is no longer necessary – here are seven reasons to add (or keep) an HTML sitemap.
Your website will grow in size.
You may add an ecommerce store with several departments, or you may expand your product portfolio. Or, more likely, the site just grows as new people are added to a company.
However, this growth can leave visitors confused about where to go or what you have to offer.
The HTML sitemap works in a similar way to a department store or shopping mall map.
The sitemap is a great way for the person maintaining the sitemap to take stock of every page and make sure it has its rightful home somewhere in the site.
This is the directory for users that can’t find the pages they are looking for elsewhere on the site and, as a last resort, this should help them get there.
Think of the HTML sitemap as an architectural blueprint for your website.
The sitemap becomes a project management tool. It oversees the structure and connections between pages and subpages.
It’s also a forcing function to make sure that you have a clean hierarchy and taxonomy for the site.
A good sitemap is like a well-organized daily schedule.
As any busy person knows, there’s a big difference between an agenda where every meeting is popped on at random and one that is themed and organized around time blocks.
In either case, an agenda is still an agenda but an organized one is far more useful for everyone.
As a content-based document, the HTML sitemap serves as a way to further define your website’s specific value.
Enhance this benefit by using SEO to identify the most unique and relevant keywords to include on the sitemap.
Anchor text is a great way of creating keyword relevancy for a page and for pages without many cross-links, a sitemap is an easy alternative to use choice anchor text.
To understand the power of anchor text alone, look at the search results for the query “click here”:
You want to help those search engines out in any way you can and take control where you can. The assistance includes finding your content and moving it up in the crawl queue.
While an XML sitemap is just a laundry list of links, HTML links are actually the way search crawlers prefer to discover the web.
The HTML sitemap helps call attention to that content by putting the spotlight on your website’s most important pages. You can also submit the text version of your sitemap to Google.
With some websites, Google and other search engines may not go through the work of indexing every webpage.
For example, if you have a link on one of your webpages, then search bots may choose to follow that link.
The bots want to verify that the link makes sense. Yet, in doing so, the bots may never return to continue indexing the remaining pages.
The HTML sitemap can direct these bots to get the entire picture of your site and consider all the pages. In turn, this can facilitate the bots’ job and they may stay longer to follow the page navigation laid out for them.
Not only does a taxonomy and hierarchy help users find themselves, but it’s incredibly important for search crawlers, too. The sitemap can help the crawlers understand the website’s taxonomy.
There is no limit to how big a sitemap can be and LinkedIn even has a sitemap which has links to all of their millions of user pages.
Not every page will connect through a link located in a header or footer.
The HTML sitemap can step in and provide these connections, addressing how visitors may look for things.
In this way, the HTML sitemap can reflect a visitor’s journey and guide them from research to purchase. In doing so, this benefit of HTML sitemaps can raise the organic search visibility of these linked pages.
In this instance, the sitemap is the fallback that ensures that there is never a page on a site that is orphaned.
I have seen huge gains in the traffic of sites that had issues with deeper pages not receiving many internal links.
Once your website grows and you develop more pages, there may be duplicate data, which can be problematic for a search engine.
But, after mapping everything out, you’ll be able to use the sitemap to find the duplication and remove it.
As an aside, this only works if there is an owner of the sitemap that is looking at the sitemap on a semi-regular basis.
Also, when you apply analytics or heat-map tools, you may find that more visitors are using the HTML sitemap than the navigation.
This is a clear signal that the current navigation is missing the mark, and you need to reassess why.
It’s important to determine how you can change the site architecture to make it easier for visitors to find what they need.
For all these benefits, you’ll want to maintain an HTML sitemap. These benefits save resources (time and money). They also deliver an effective way to guide your website visitors to what they need and help close those sales.
If you don’t have an HTML sitemap but do use a platform like WordPress, I recommend one of the many sitemap plug-ins. The plug-ins automate much of the sitemap development and management process.
For larger sites, it might take running a web crawl with a dedicated crawling tool.
The output of this web crawl should then serve as the basis for organizing all of a site’s pages around themes.
After developing the HTML sitemap, don’t forget to put a link on your website that is easy to find.
You can either put the link at the top, as part of a sidebar or in a footer menu that continues to be accessible as visitors move from page to page.
However you look at it, an HTML sitemap is an easy way to get huge benefits without a lot of effort.
In the early days of the Internet and search, Google differentiated themselves from the other search engines by focusing on quality signals to determine relevancy for a query. Amazingly, the other engines – and yes there were lots of other engines – completely ignored quality and looked at keyword matches to pages in the index.
The primary signal that Google used to determine quality is the value of the links that point to a specific page or website. The value passed by those inbound links is calculated from the value of their own links, and on and on it goes. From Google’s perspective, the Internet is a true web of pages linking to each other and connecting all pages together.
Google modeled their ranking algorithm like a traditional academic authority model. An academic paper with a new idea is considered to be more authoritative if it has a large quantity of citations discussing it. At the same time the quantity of those citations has to be qualified by the quality of the citations, so a paper cited by a Nobel laureate would be more valuable than one cited by a high school senior.
Moving this model over to the web, Google used the same calculation to value websites. A website that has a link pointing to it from Stanford University would in theory be more valuable than one that only has a link from Kaplan University. It’s not that Google recognizes that Stanford is a highly reputable university with a higher caliber of education than Kaplan because of the Stanford “brand”; rather, Stanford has more authority because a higher quality of other websites link to it than to Kaplan.
Furthermore, quality is not created by a website alone; the linking page also must have its own authority, which is calculated from the internal link flow as well as any external inbound links. In this respect, a link from the Kaplan homepage to a website is likely more valuable from a link standpoint than one from a private student’s blog on the Stanford domain.
Viewed holistically in this manner, the idea of a .edu or .gov having more link authority than a .com is completely false, as every domain has to stand on its own within the web based on its own backlinks. It is likely that a .edu or .gov will have more link value to share, but there is no guarantee. Just to underscore this point, Google knows that whitehouse.gov is the most valuable government website not because of what it is, but because it has the highest value of incoming links.
While Google claims to view hundreds of factors in determining rankings, links have always had a very prominent part in the calculation. The non-link factors are a lot more mysterious, but on its face, this link calculation algorithm seems very simple to manipulate. High-value links will pass an extraordinary amount of value and help the linked page rise in search rankings.
As a result, almost from the day Google launched its index, huge economies sprung up to help marketers manipulate their rankings via artificial links. On the cleaner end of things, there were reporters or websites willing to accept compensation in exchange for a link placement, while on the dirtier end there were botnets designed to hack websites just to place links.
In between these two options, there were brokers that assisted websites in finding the perfect place to purchase a link on a permanent or even temporary basis. Up until 2012, all of this link manipulation was remarkably effective and websites that spent vast sums on link building saw their websites dominate valuable positions in Google. But this is not the way Google had been conceived to work. Websites were not supposed to just be able to spend their way to the top of rankings when Google really wanted its index to focus on user experience and relevancy.
In 2012, Google released their Penguin algorithm update whose sole purpose was to identify manipulative linking schemes and demote the recipients of the links. When possible, Google nuked entire link networks bringing down sites that linked as well as the sites receiving the links.
For the first few months and even years after Google unveiled this algorithm update, websites were terrified of having their previously undiscovered link building efforts revealed, leading to a penalty. Sites frantically posted disavow files to Google where they disclosed the shady links they may have had a role in acquiring. Out of fear, websites even proactively disavowed links that they had nothing to do with. This algorithm update gave rise to the concept of negative SEO, where a malicious person could point dirty links at a website and then watch Google penalize the receiving website for having dirty inbound links. (Note: Google claims this is not possible, but there are many case studies of negative SEO working.)
It has now been almost 8 years since this algorithm update, and link buying activity is once again picking up. Websites have become more confident in their abilities to evade Google and use these links to accelerate their SEO efforts. This time it is called guest posts or sponsored posts rather than outright paid links.
Google is smarter than you think
In my opinion, most of the time this is a completely wasted effort, not because the sites will be caught by Google, but because the links just don’t work. What many might forget about Google is that it is a company driven by machine learning and artificial intelligence. Outside of search, Google’s Waymo has driven more autonomous miles than anyone else working on self-driving vehicles. To date, in the 5 million miles driven by Waymo, we have not heard of any serious injury or fatality it caused, which means that Google has AI that is good enough to make life-and-death decisions. That challenge is light years more complex than ranking search results.
In many instances a human reviewer can quickly identify a pattern of artificial links, which means that Google’s AI can likely do the same. A website might not get penalized when its artificial links are discovered, but the links themselves will just be discounted from the ranking algorithm. The net result is that any resources expended in acquiring the links were completely for naught.
If links are an important component of SEO and they can’t be manipulated, this might seem like a dead end for a website looking to increase rankings. Fortunately, there is a solution and it is one that Google recommends: Build a brand. Brands don’t build links, they get links.
Brands in search
Google has been accused of favoring brands in search, and that should be true simply because users favor brands! Just as in a supermarket we gravitate to the branded products, the exact same thing happens on a search results page. As in the earlier explanation about the value of university links, Google doesn’t give a brand extra credit for being a brand; rather, they recognize brands because brands have brand characteristics.
Building a brand on the web is not an easy feat, but the first step is to think like a brand. A brand like Coca-Cola doesn’t seek websites to link to them, because they know that if they create well-designed products, refreshing beverages, and memorable campaigns, the media will talk about them. A brand focuses on its core product offering first and then seeks to get attention. A non-brand seeks to get attention so they can one day have a great product.
Focus on the right goal
Focus on the product and let marketing tell that product story. The byproduct of that story will establish the brand and lead to links which will reinforce the brand. This does not have to be done without help. Brands use PR agencies to tell their stories, and any company aspiring to be a brand can do the same.
There are amazing PR agencies that are familiar with SEO who can ensure that there are links within promotional campaigns, but the PR is the focus not the link.
Links are and always will be a part of the ranking algorithm, but think of the algorithm like the smart human Google intends it to one day be. If a human could easily detect an unnatural link the algorithm likely could too.
Instead of using precious resources to build those unnatural links, deploy that effort to build a brand which attracts the links that lead to rankings. A clever infographic designed to inform, a media campaign, a billboard or a unique approach to data can all be used to generate the buzz that leads to links and eventually coveted rankings. Don’t focus on links as the means to build a brand in search – instead view the links as the byproduct of brand building that Google always intended them to be. Links are just a piece of the algorithm designed to inform Google about authority that should already exist.