What is the best way to treat URLs ending in /?s=
Hi Alex
These are parameters that sit after the main URL and often include 'sort' and 'page'. There are a number of ways of dealing with them, which I run through below. (They can also be created on some eCommerce pages as 'products', but those should be dealt with via a mod_rewrite so that the site shows properly constructed URLs with the category name and title - a minimal sketch of that follows.)
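This is only a sketch - the index.php filename and the numeric 'products' parameter are assumptions rather than any particular platform's setup, and a real shop would map the category name and product title into the path instead of a bare ID:

```apache
# Illustrative Apache .htaccess sketch - file and parameter names are assumptions.
RewriteEngine On

# 301 the raw parameter URL to the clean path. Checking THE_REQUEST means the
# rule only fires on the original client request, so it can't loop with the
# internal rewrite below.
RewriteCond %{THE_REQUEST} \?products=([0-9]+)
RewriteRule ^index\.php$ /products/%1? [R=301,L]

# Serve the clean URL /products/123 internally from index.php?products=123.
RewriteRule ^products/([0-9]+)/?$ index.php?products=$1 [L,QSA]
```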
1. Google Search Console - you have to be very careful messing with the rules in parameter handling, but for some parameters this is the way:
- 'sort' - you can tell Google that it narrows the content on the page, then either let Googlebot decide or block the URLs. I often block them, as they just create thin and duplicate content.
- Pagination - for 'page' you can tell Google that the parameter paginates, then let Googlebot decide. Look at the rel=prev/next tags on those pages as well (there's a sketch after this list).
- Attributes - like size and colour - I generally block those, as they just create thin duplicates of the main categories.
- Others - like 'catalog' - it depends on what platform you use, but other parameters could be created too. I block most of them, as they create useless URLs.
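Here is what the rel=prev/next tags look like on page 2 of a paginated category - the example.com URLs are placeholders:

```html
<!-- In the <head> of /category?page=2 (illustrative URLs) -->
<link rel="prev" href="https://www.example.com/category?page=1">
<link rel="next" href="https://www.example.com/category?page=3">
```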
2. Robots.txt
You can use this file to keep the search bots away from these pages, matched by parameter (strictly speaking, robots.txt blocks crawling rather than indexing). Once again, be very careful, as you don't want to accidentally block useful areas of the site.
https://a-moz.groupbuyseo.org/learn/seo/robotstxt
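As a minimal sketch - the parameter names below are assumptions, so swap in whatever your platform actually generates:

```
User-agent: *
# Block internal search results (the /?s= URLs from the question)
Disallow: /*?s=
# Block sort and attribute parameters wherever they sit in the query string
Disallow: /*?*sort=
Disallow: /*?*size=
Disallow: /*?*colour=
```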
3. Canonicals
If you are able to, a great way of dealing with attributes like size and colour is to canonicalize back to the non-size-specific URL. This maintains the link juice for those URLs, which might otherwise be lost if you blocked them altogether: you add a rel=canonical tag pointing to the non-parameter version.
https://a-moz.groupbuyseo.org/learn/seo/canonicalization
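For example, on a red size-9 variant page the tag sits in the <head> and points at the base URL (example.com is a placeholder):

```html
<!-- In the <head> of /shoes?colour=red&size=9 -->
<link rel="canonical" href="https://www.example.com/shoes">
```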
4. As a last resort you can 301 redirect them, but frankly, if you have dealt with them properly you shouldn't have to. It's also bad practice to have live 301 redirects in the internal structure of a website - it's best to link to the correct URL directly.
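If you do end up doing it, a last-resort sketch for the /?s= URLs in the question might look like this in Apache - it assumes you want those internal-search URLs to land on the homepage, which you should adjust to suit:

```apache
# Last-resort sketch: 301 any homepage request carrying an s= search parameter.
RewriteEngine On
RewriteCond %{QUERY_STRING} (^|&)s= [NC]
RewriteRule ^$ /? [R=301,L]
```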
There is more reading here:
https://a-moz.groupbuyseo.org/community/q/which-is-the-best-way-to-handle-query-parameters
https://a-moz.groupbuyseo.org/community/q/do-parameters-in-a-url-make-a-difference-from-an-seo-point-of-view
https://a-moz.groupbuyseo.org/community/q/how-do-i-deindex-url-parameters
Regards,
Nigel