When I first moved to the agency scene, I was surprised by the kind of keyword research being performed on behalf of enterprise SEO clients. More specifically, I was shocked that virtually all client SEO programs were based on what I call “top down” or “site-centric” keyword research.
In other words, the powers that be at the agency where I was working believed that the first and fundamental step in keyword research was to look at the entire client site as a whole and pull one large “master” keyword list, which would then serve as the source of keyword selections at the individual page level. In addition, this master list would be used to create a baseline ranking report for the program, which would then be compared against monthly ranking report updates in an effort to track the progress of SEO efforts. In some cases, the master keyword list might also be used as a guide for tracking traffic and conversion fluctuations via analytics data.
Frankly, I found this to be appalling. Here’s why:
- Even when you use sophisticated spidering and algorithmic techniques to try to ascertain all of the possible keyword combinations related to a specific website (and all of the pages therein), you typically end up with a labyrinth-like list of keywords that is relatively difficult to map back to individual pages of the site.
- You also end up overlooking certain long-tail, niche (and potentially high-converting) keywords, because these mammoth “master” keyword lists require a cutoff point of some sort in terms of estimated searches per month. As a result, you miss diamond-in-the-rough keywords that are potentially the most relevant to a specific page of content as well as potentially the best converting.
- From an analytics standpoint, using a “master” list of keywords to set up a baseline is a recipe for disaster. Why? Because more often than not, many of the keywords that make it onto that initial “master” list are never actually the subject of optimization efforts. Therefore, any ranking movement (up or down) or fluctuation in traffic/conversions will be erroneously attributed to the SEO program.
Mind you, I’m not saying that pulling an initial “master” keyword list is a complete waste of time, only that it cannot serve as the foundation of keyword research or rankings/analytics measurement.
If you’re currently using this type of methodology, or some variation thereof, and you agree with my premise but aren’t sure how to go about improving your approach, here are some ideas that focus on what I call “page-centric” keyword research:
- Perform keyword research at the individual page level – some of you are reading this and saying “duh.” If that’s the case, good for you; feel free to move on to the next bullet point. If that’s not the case, then here’s what you do. Start with the homepage, then move on to the category pages, and finally work down to the deeper content/product pages of your site. Make sure to take the copy and general theme of each page into account, and then use one of the many keyword research tools available to identify the absolute best fit for the page you’re working on.
Also, make sure to take volume, competition, and specificity into account. In layman’s terms, think about whether the keyword phrase is best suited for a high-level page like a category page (or even the homepage), or whether it belongs on a deeper content page. And if you come across a keyword that doesn’t quite fit on any existing page, then you know it’s time to create a new page of content!
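If it helps to see the volume/competition/specificity triage written out, here is a minimal sketch of the kind of routing logic I'm describing. Everything in it is an illustrative assumption: the function name, the thresholds, and the sample keywords are all made up, and real keyword tools report volume and competition in their own formats.

```python
# Hypothetical sketch: routing candidate keywords to a page level based on
# volume, competition, and specificity. Thresholds and data are invented
# for illustration, not pulled from any real keyword tool.

def suggest_page_level(keyword, monthly_volume, competition):
    """Suggest where a keyword phrase belongs on the site.

    competition is a 0.0-1.0 score like the one many keyword tools
    report; specificity is approximated by the phrase's word count.
    """
    words = len(keyword.split())
    if words <= 2 and monthly_volume >= 10_000:
        return "homepage/category"            # broad, high-volume head term
    if words <= 3 and monthly_volume >= 1_000:
        return "category"                     # mid-tail theme keyword
    if competition < 0.3:
        return "deep content/product page"    # long-tail, low competition
    return "deep content/product page (consider a new page)"

candidates = [
    ("running shoes", 50_000, 0.9),
    ("trail running shoes", 8_000, 0.6),
    ("waterproof trail running shoes for flat feet", 150, 0.2),
]

for kw, vol, comp in candidates:
    print(f"{kw!r} -> {suggest_page_level(kw, vol, comp)}")
```

The point isn't the exact cutoffs; it's that each keyword gets mapped to one specific page level, and anything that doesn't fit an existing page becomes a prompt for new content.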
- Perform analytics-based keyword research – to do this, you’ll need ranking and traffic/click-through data. I personally use a combination of Google Analytics and Google Webmaster Tools. What you’re looking to do is identify keywords that have already referred traffic to your site (I use Google Analytics for this) and/or keywords that are receiving impressions via Google (I use Google Webmaster Tools for this), and then gauge the impressions/traffic being received relative to the respective ranking for each keyword.
Google Webmaster Tools makes this easy because it provides both the impression/click-through data and the average ranking position for every keyword that fits the criteria (how sweet is that!), but you can also combine Google Analytics keyword referral data with ranking report data to reach a similar conclusion. Once you identify keywords that hit a certain sweet spot (e.g., they’re close to ranking above the fold and/or already deliver significant traffic/impressions despite non-first-page rankings), you can then identify which page is showing up for each keyword and work on optimization tweaks (an update to the title tag, a new internal link with anchor text, a new external inbound link, etc.).
The great part about this technique is that it helps you find keywords that traditional keyword research methods might have missed. It also helps you spot “money” or “on the cusp” keywords that are on the verge of becoming big ROI contributors.
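The sweet-spot filter above can be sketched in a few lines. Note that this is a hedged illustration: the rows mimic the (keyword, impressions, clicks, average position) shape of a Webmaster Tools export, but the numbers, the impression floor, and the position window are all assumptions you'd tune for your own site.

```python
# Hypothetical sketch of the "sweet spot" filter: keywords just outside the
# top 3 (close to ranking above the fold) or drawing real impressions
# despite a non-first-page rank. All rows and thresholds are invented.

def is_sweet_spot(impressions, avg_position,
                  min_impressions=500, position_window=(4, 20)):
    """Return True when a keyword is worth an optimization tweak."""
    low, high = position_window
    return impressions >= min_impressions and low <= avg_position <= high

rows = [
    ("blue widget reviews", 2_400, 60, 11.2),   # page 2, lots of impressions
    ("widget store",       12_000, 900, 1.8),   # already ranks well, skip
    ("cheap widgets",         150,   4, 9.5),   # too few impressions, skip
]

sweet = [kw for kw, imp, clicks, pos in rows if is_sweet_spot(imp, pos)]
print(sweet)  # candidates for title-tag, internal-link, or inbound-link work
```

Each keyword that survives the filter then gets matched to the page ranking for it, and that page receives the small tweaks mentioned above.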
I hope you’ve found this little 101 tutorial helpful. As always, if you have questions or feedback feel free to leave me a comment or contact me via the “contact” form on this site.
P.S. For those of you who use the Google AdWords Keyword Estimator tool, it might be worthwhile to read this post over at Dave Naylor’s blog. I’m not saying to ditch the Google tool altogether; it’s just that this kind of anecdotal evidence is important and will help you frame the way you describe traffic estimates to clients and colleagues.