Pagination is something that should be easy, but when it goes wrong, it can really go wrong, impacting crawling and indexation.
It's one of the first things I go to clean up, as it's a great way to reduce the overall number of crawlable pages on a site, particularly one that's wasting its crawl budget.
Folders or query parameters for pagination?
Provided there are no technical issues preventing it, I recommend using query parameters for pagination.
?page=x is just such a clear signal to Google that it's page-related, and I'm all about making things clear for a robot.
Are Rel next & Rel Prev still required tags?
No, you do not need to use the rel=next and rel=prev tags anymore for Google.
One of the Google engineers gave a presentation in Australia, and mentioned how the conversation about the tags went.
Mueller walked in one day and essentially said “You know we’re not using these tags anymore?”, they looked at each other, and then the tags were deprecated.
Someone had accidentally or unknowingly removed the tags from Google's checks, and no one noticed, so support for them was officially dropped.
They are still used by other search engines, though, so if Google isn't the primary search engine in your market, you should definitely still include the tags.
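If you do keep the tags, they're just link elements in the head pointing at the previous and next page in the series. A minimal sketch of generating them, assuming the convention used throughout this post (page 1 at the bare URL, deeper pages at ?page=N) — the function name and URL are hypothetical:

```python
def rel_prev_next_tags(base_url: str, page: int, last_page: int) -> list:
    """Build rel=prev/next link tags for one page of a paginated series.

    Assumes page 1 lives at base_url with no query parameter, and
    deeper pages use ?page=N (hypothetical URL scheme for illustration).
    """
    def url(n: int) -> str:
        return base_url if n == 1 else f"{base_url}?page={n}"

    tags = []
    if page > 1:  # no rel=prev on the first page
        tags.append(f'<link rel="prev" href="{url(page - 1)}">')
    if page < last_page:  # no rel=next on the last page
        tags.append(f'<link rel="next" href="{url(page + 1)}">')
    return tags
```

On page 2 of a 5-page series, for example, this emits a rel=prev pointing at the clean first-page URL and a rel=next pointing at ?page=3.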
SEO Pagination best practices
Pagination best practices are pretty simple at the core.
- Always link to the first page
- Link to the next couple of pages, and a couple of previous pages
- Use a clean parameter where possible, with no parameter for the first page
- Ensure the pagination query parameter is in the canonical tag
- Use rel next/prev tags if you want / if they’re easy enough
- Don’t link to the final page in a series from the early pages
- Limit page counts where possible
- 301 redirect any page counts outside the available range back to the first page
- Include a decent number of results per page, 20-30, rather than many pages of fewer results
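The linking rules above can be sketched as a small helper that picks which page numbers to link from the current page: always page 1, a couple of previous and next pages, and no jump link to the final page (the last page is only reached naturally, page by page). This is an illustrative sketch, not any particular platform's implementation:

```python
def pagination_links(current: int, last_page: int, window: int = 2) -> list:
    """Page numbers to link from the current page.

    Always includes page 1, plus up to `window` pages either side of
    the current page. The final page is never added as a jump link;
    it only appears when it falls inside the window naturally.
    """
    pages = {1, current}
    for offset in range(1, window + 1):
        if current - offset >= 1:
            pages.add(current - offset)
        if current + offset <= last_page:
            pages.add(current + offset)
    return sorted(pages)
```

From page 5 of a 345-page series this links pages 1, 3, 4, 5, 6 and 7 — page 345 never gets a direct link from the early pages.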
Common SEO issues with pagination
There are a handful of issues I tend to look at when auditing a pagination setup.
1. Page 1 (the default URL) has a query parameter on internal links
Some default e-commerce setups will include a ?page=1 parameter on the end of links back to the first page.
By default though, this first page has no query parameter.
So you end up with two URLs for the same page — for example, the bare category URL and the same URL with ?page=1 appended.
Exactly the same page, with exactly the same content, yet two different URLs.
Even if the page=1 parameter is being stripped via parameter handling, you're still actively linking to an alternate version of a page, confusing Google.
John Mueller made a comment about UTM tags on internal links, but the comment applies directly to this scenario too.
More from @johnmu about utm params: "Our systems try to understand the different urls… so send Google clear signals. Rel canonical is a strong signal, but so are internal links. You could also be causing more crawling by using those parameters." https://t.co/vCz9gjpFTQ
— Glenn Gabe (@glenngabe) February 19, 2019
What you're asking Google to rank, and what you're linking to, are separate pieces of content.
Why confuse Google?
Just ensure that any links back to page 1 exactly match what you’re expecting, which would be without the query parameter.
2. Canonical tag doesn’t include the query parameter
Canonical tags must include the pagination query parameter.
Google even mentions this in their 'common canonical mistakes' blog post.
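In other words, each paginated page should canonicalize to itself, keeping the page parameter — only page 1 gets the clean, parameter-free URL. A small sketch of what that tag generation might look like (hypothetical helper, same ?page=N URL convention as above):

```python
def canonical_tag(base_url: str, page: int) -> str:
    """Canonical tag for a paginated page: each page is canonical to
    itself, so the page parameter is kept for pages beyond the first."""
    href = base_url if page <= 1 else f"{base_url}?page={page}"
    return f'<link rel="canonical" href="{href}">'
```

Pointing every paginated page's canonical at page 1 instead is the common mistake: it tells Google the deeper pages (and everything linked only from them) are duplicates.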
3. The final page is linked to from the first page
Linking to the final page is what every single pagination link set does. Ever.
Why would you link to the end, when the most important results are on the first set of pages?
I always recommend removing the link to the final page in a set, and instead ensuring the first few links are available.
If Google wants to crawl all the way to page 345 it can, naturally, going page by page in the order of priority you have set.
4. Not including enough results per page
Some websites won’t use their content to its full potential. Instead, they’ll limit each page to 10 results, and just have more pages.
Bring some of that hidden content forward, to the first page.
Include more results, at least 20, maybe even 30 nowadays, and make your primary page for the search result stronger.
5. No limits on pagination
The final issue here is allowing unlimited pages of results to be created.
Will a user ever click through to page 345?
I highly doubt it.
But a scraper will, and they’ll take all your data.
You’ll also be re-using the same set of results, over and over across the site.
By limiting the depth of pagination, you severely limit the reuse of the same listings.
IMO, this then gives the listings more value for when they’re used in the top couple pages.
Limit the pagination to what you feel is reasonable, maybe page 20, or page 50 tops, and you can start to craft crawl behavior a little more.
With pagination limited, you should 301 redirect any page outside this range back to the first page.
That will help with the initial URL cull, and will also help clean up URLs when page counts drop due to fewer results.
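The cap plus redirect can be sketched as a small request handler check — the function, URL and cap value are illustrative, not a specific platform's API:

```python
def pagination_response(base_url: str, page: int, last_page: int,
                        max_page: int = 50):
    """Decide how to respond to a requested page number.

    Pages past the cap (or past the real last page) 301 back to the
    first page; valid pages are served normally. Returns a
    (status_code, redirect_location) tuple for illustration.
    """
    limit = min(last_page, max_page)  # cap pagination depth
    if page < 1 or page > limit:
        return (301, base_url)  # permanent redirect to page 1
    return (200, None)
```

With a cap of 50, a request for page 345 gets a 301 back to the clean first-page URL rather than serving yet another thin page of recycled listings.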
Is infinite scroll or pagination better for SEO?
Well, there are ways to make infinite scroll work, if you really want it.
However, pagination is so much easier not only to implement but also to monitor and patch issues with, so it's my preferred option whenever I'm given the choice.
Handling pagination is easy
Handling pagination cleanly and efficiently is actually pretty easy once you tick a few boxes and set some limits.