Why is rel="canonical" pointing at a URL with parameters bad?
-
Context
Our website has a large number of crawl issues stemming from duplicate page content (source: Moz).
According to an SEO firm that recently audited our website, some of these crawl issues are due to URL parameter usage. They have recommended that we "make sure every page has a Rel Canonical tag that points to the non-parameter version of that URL…parameters should never appear in Canonical tags."
Here's an example URL where we have parameters in our canonical tag...
http://www.chasing-fireflies.com/costumes-dress-up/womens-costumes/
rel="canonical" href="http://www.chasing-fireflies.com/costumes-dress-up/womens-costumes/?pageSize=0&pageSizeBottom=0" />
Our website runs on IBM WebSphere v 7.
Questions
- Why is it important that the rel canonical tag points to a non-parameter URL?
- What is the extent of the negative impact from having rel canonicals pointing to URLs including parameters?
- Any advice for correcting this?
Thanks for any help!
-
Thanks for the response, Eric.
My research suggested the same plan of attack: 1) fixing the canonical tags and 2) configuring URL parameters in Google Search Console. It's helpful to get your confirmation.
My best guess is that the parameters you've cited above are not needed for every URL. I agree that this looks like something WebSphere Commerce probably controls. I'm a few organizational layers removed from whoever set this up for us. I'll try to track down where we can control that.
-
Thanks Peter!
-
Peter has a great answer with some good resources referenced, and I'll try to add a little bit:
1. Why is it important that the rel canonical tag points to a non-parameter URL?
It's important to use clean URLs so search engines can understand the site structure (as Peter mentioned), which helps reduce the potential for index bloat and ranking issues. The more pages out there containing the same content (i.e., duplicate content), the harder it is for search engines to determine which page is the best one to show in search results. While there is no "duplicate content penalty," you can create a self-inflicted wound by providing too many similar options. The canonical tag gives you a level of control to tell Google which version of a page is the most appropriate. In this case it should be the clean URL, since that is where you want people to start; users can customize from there using faceted navigation or custom options.
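To make this concrete, here is a minimal sketch (my own hypothetical illustration, not code from the site or from WebSphere Commerce) of how a page template could derive the canonical href by simply stripping the query string; the pageSize=40 variant is invented for illustration:

```python
from urllib.parse import urlsplit, urlunsplit

def canonical_href(requested_url):
    """Return the clean, parameter-free version of a URL for use
    in a rel="canonical" tag (hypothetical helper)."""
    parts = urlsplit(requested_url)
    # Keep scheme, host, and path; drop the query string and fragment.
    return urlunsplit((parts.scheme, parts.netloc, parts.path, "", ""))

# Every parameterized variant of the category page maps to one clean URL:
for url in [
    "http://www.chasing-fireflies.com/costumes-dress-up/womens-costumes/?pageSize=0&pageSizeBottom=0",
    "http://www.chasing-fireflies.com/costumes-dress-up/womens-costumes/?pageSize=40",
    "http://www.chasing-fireflies.com/costumes-dress-up/womens-costumes/",
]:
    print(f'<link rel="canonical" href="{canonical_href(url)}" />')
```

All three variants emit the same clean tag, which is exactly the consolidation signal the canonical is meant to send.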
2. What is the extent of the negative impact from having rel canonicals pointing to URLs including parameters?
Primarily duplicate content and indexing issues. Both are things you really want to avoid when running an e-commerce shop, since they make your pages compete with each other for rankings. If this is implemented wrong, it can cost you rankings, visits, and revenue.
3. Any advice for correcting this?
Fixing the canonical tags on the site would be your first step. Next, you would want to exclude those parameters in the URL Parameters section of Google Search Console. That tells Google to ignore URLs containing the parameters you list there, and it's another step toward getting clean URLs to show up in search results.
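As a rough starting point for that first step, a short audit script along these lines (a sketch using only the Python standard library; the parsing is deliberately naive, and the URL is just the example from the question) can flag pages whose canonical still carries a query string:

```python
import re
import urllib.request

def check_canonical(page_url):
    """Fetch a page and report whether its rel="canonical" href still
    contains URL parameters. Naive regex parsing; an audit sketch,
    not production code."""
    html = urllib.request.urlopen(page_url).read().decode("utf-8", errors="replace")
    for tag in re.findall(r"<link[^>]*>", html, flags=re.IGNORECASE):
        if re.search(r'rel=["\']canonical["\']', tag, re.IGNORECASE):
            href = re.search(r'href=["\']([^"\']+)["\']', tag, re.IGNORECASE)
            if href:
                status = "PARAMETERIZED" if "?" in href.group(1) else "clean"
                return f"{page_url}: canonical is {status} -> {href.group(1)}"
    return f"{page_url}: no canonical tag found"

print(check_canonical("http://www.chasing-fireflies.com/costumes-dress-up/womens-costumes/"))
```

Running it across a sitemap or crawl export would give you a quick list of templates to fix.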
I tried getting to http://www.chasing-fireflies.com/costumes-dress-up/mens-costumes/ and realized the parameters show up by default (technically as a URL fragment here): http://www.chasing-fireflies.com/costumes-dress-up/mens-costumes/#w=*&af=cat2:costumedressup_menscostumes%20cat1:costumedressup%20pagetype:products
Are the parameters needed for every URL? This seems like a WebSphere Commerce configuration thing.
-
A clean (parameter-free) canonical URL helps Google better understand your URL structure and avoid several common mistakes:
https://googlewebmastercentral.blogspot.bg/2013/04/5-common-mistakes-with-relcanonical.html <- mistake no. 1
http://www.hmtweb.com/marketing-blog/dangerous-rel-canonical-problems/ <- mistake no. 4
So the firm giving you this advice is correct! You should use clean, parameter-free URLs wherever possible.