Moz Q&A is closed.
After more than 13 years, and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content (many posts will still be viewable), we have locked both new posts and new replies.
Why is rel="canonical" pointing at a URL with parameters bad?
-
Context
Our website has a large number of crawl issues stemming from duplicate page content (source: Moz).
According to an SEO firm that recently audited our website, some of these crawl issues are due to URL parameter usage. They have recommended that we "make sure every page has a Rel Canonical tag that points to the non-parameter version of that URL…parameters should never appear in Canonical tags."
Here's an example where we have parameters in our canonical tag. The page:
http://www.chasing-fireflies.com/costumes-dress-up/womens-costumes/
And its canonical tag:
<link rel="canonical" href="http://www.chasing-fireflies.com/costumes-dress-up/womens-costumes/?pageSize=0&pageSizeBottom=0" />
Our website runs on IBM WebSphere v7.
Questions
- Why is it important that the rel canonical tag points to a non-parameter URL?
- What is the extent of the negative impact from having rel canonicals pointing to URLs including parameters?
- Any advice for correcting this?
Thanks for any help!
-
Thanks for the response, Eric.
My research suggested the same plan of attack: 1) fix the canonical tags and 2) configure URL parameter handling in Google Search Console. It's helpful to get your confirmation.
My best guess is that the parameters you've cited above are not needed for every URL. I agree that this looks like something WebSphere Commerce probably controls. I'm a few organizational layers removed from whoever set this up for us. I'll try to track down where we can control that.
-
Thanks Peter!
-
Peter has a great answer with some good resources referenced, and I'll try to add on a little bit:
1. Why is it important that the rel canonical tag points to a non-parameter URL?
It's important to use clean URLs so search engines can understand the site structure (like Peter mentioned), which helps reduce the potential for index bloat and ranking issues. The more pages out there containing the same content (i.e. duplicate content), the harder it is for search engines to determine which page is the best one to show in search results. While there is no "duplicate content penalty," you can inflict a wound on yourself by giving Google too many similar options. The canonical tag is meant to be a level of control for you: a way to tell Google which version of a page is the most appropriate. In this case that should be the clean URL, since that's where you want people to start; users can customize from there using faceted navigation or custom options.
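To make this concrete, here's what the fix would look like on the women's costumes page from your question (a minimal sketch, assuming the parameter-free URL is the version you want indexed):

```html
<!-- Current tag: the canonical points at a parameterized URL, which tells
     Google that the "messy" version is the preferred one -->
<link rel="canonical" href="http://www.chasing-fireflies.com/costumes-dress-up/womens-costumes/?pageSize=0&pageSizeBottom=0" />

<!-- Corrected tag: the canonical points at the clean, parameter-free URL -->
<link rel="canonical" href="http://www.chasing-fireflies.com/costumes-dress-up/womens-costumes/" />
```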
2. What is the extent of the negative impact from having rel canonicals pointing to URLs including parameters?
Basically, duplicate content and indexing issues. Both are things you really want to avoid when running an eComm shop, since they make your pages compete with each other for rankings. That can cost you rankings, visits, and revenue.
3. Any advice for correcting this?
Fixing the canonical tags on the site would be your first step. Next, you'd want to exclude those parameters in the URL Parameters section of Google Search Console. That helps by telling Google to ignore URLs containing the parameters you list there. It's another step toward getting clean URLs showing up in search results.
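Since you're on WebSphere, the canonical tag is probably being generated by a JSP template somewhere in the storefront. I don't know your exact setup, so treat this as a hypothetical sketch of the general idea: build the canonical from the request path alone, which drops the query string automatically.

```jsp
<%-- Hypothetical sketch for a shared JSP header fragment. The implicit
     request object's getRequestURI() returns the path without the query
     string, so a request for
     /costumes-dress-up/womens-costumes/?pageSize=0&pageSizeBottom=0
     yields a canonical of /costumes-dress-up/womens-costumes/ --%>
<link rel="canonical"
      href="http://www.chasing-fireflies.com<%= request.getRequestURI() %>" />
```

However your templates actually build the tag, the principle is the same: strip the query string before writing out the canonical URL.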
I tried going to http://www.chasing-fireflies.com/costumes-dress-up/mens-costumes/ and noticed the parameters show up by default, like this: http://www.chasing-fireflies.com/costumes-dress-up/mens-costumes/#w=*&af=cat2:costumedressup_menscostumes%20cat1:costumedressup%20pagetype:products (strictly speaking, everything after the # there is a URL fragment rather than a query parameter, so it isn't even sent to the server).
Are the parameters needed for every URL? Seems like this is a WebSphere Commerce setup kind of thing.
-
A clean (parameter-free) canonical URL helps Google better understand your URL structure and avoids several common mistakes:
https://googlewebmastercentral.blogspot.bg/2013/04/5-common-mistakes-with-relcanonical.html <- mistake #1
http://www.hmtweb.com/marketing-blog/dangerous-rel-canonical-problems/ <- mistake #4
So the firm giving you this advice is CORRECT! You should provide clean, naked URLs everywhere it's possible.