Moz Q&A is closed.
After more than 13 years, and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we’re not completely removing the content - many posts will still be possible to view - we have locked both new posts and new replies. More details here.
Diagnosing Canonical Errors: Is Screaming Frog Reliable?
-
Morning from sunny & warm Wetherby, UK.
On this page http://www.goldsboroughestates.co.uk/how-we-care-for-you/right-to-manage/ Screaming Frog is citing a canonical error, but I'm confused, as this piece of code is in place:
<link rel="canonical" href="http://www.goldsboroughestates.co.uk/About/right-to-manage" />
So my question is, please: "Does this page http://www.goldsboroughestates.co.uk/how-we-care-for-you/right-to-manage/ have a canonical error, or is Screaming Frog useless?"
Other examples where Screaming Frog is picking up canonical errors include:
http://www.goldsboroughestates.co.uk/what-our-customers-say/right-to-manage/
http://www.goldsboroughestates.co.uk/buying-a-home/right-to-manage/
Oh, forgot to say: the preferred version is http://www.goldsboroughestates.co.uk/About/right-to-manage/
Any insights welcome.
-
Hey,
Long time since the question was asked; I was just wondering whether you worked it out or not.
Gr.,
Istvan
-
I think Screaming Frog is just warning you that the canonical version doesn't seem to match the display URL. They can't really tell (we have the same problem in SEOmoz tools) what the "right" canonical is - they can just warn of a mismatch.
I'm a bit confused as to the purpose of the dual URLs here. The best canonical implementation is to use one URL. The canonical tag can act as a band-aid, but consistency is still the best defense. Having multiple paths to the same page is rarely beneficial.
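To make that mismatch concrete, here's a rough sketch of the kind of check a crawler runs (just an illustration in Python, not Screaming Frog's or SEOmoz's actual code; it assumes the requests and beautifulsoup4 packages, and the function name is made up):

```python
# Fetch a page, read its rel="canonical" href, and warn when it doesn't
# match the URL that was crawled. A crawler can only flag the mismatch;
# it cannot know which of the two URLs is the "right" one.
import requests
from bs4 import BeautifulSoup

def report_canonical_mismatch(url):
    response = requests.get(url, timeout=10)
    tag = BeautifulSoup(response.text, "html.parser").find("link", rel="canonical")
    canonical = tag.get("href") if tag else None
    if not canonical:
        print(f"{url} -> no canonical tag found")
    elif canonical != url:
        # A trailing-slash difference counts as a mismatch too.
        print(f"{url} -> canonical points elsewhere: {canonical}")
    else:
        print(f"{url} -> canonical matches the crawled URL")

report_canonical_mismatch("http://www.goldsboroughestates.co.uk/how-we-care-for-you/right-to-manage/")
```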
-
Having spoken to our internal helpdesk (who I trust and who do know what they're talking about), they've taken a look at:
http://www.goldsboroughestates.co.uk/footer-links/left/right-to-manage/
http://www.goldsboroughestates.co.uk/how-we-care-for-you/right-to-manage/
http://www.goldsboroughestates.co.uk/buying-a-home/right-to-manage/
http://www.goldsboroughestates.co.uk/what-our-customers-say/right-to-manage/
and I'm afraid they have a different perspective, which is that they see no canonical problem.
Hey ho, I think I'll just set my head on fire; then maybe things will be clearer.
-
Hi Istvan - your advice is good, but I've just discovered it's not been implemented! Time to kick some ass; I'll update you.
-
Hey,
Any news on how it went? I am curious if that was the problem or not.
Gr.,
Istvan
-
Hey,
Maybe this helps you a little bit: http://www.seomoz.org/blog/an-seos-guide-to-http-status-codes
Dr. Pete's article explains well how the status codes work.
Gr.,
Istvan
-
Wow, great answer! I'm on to this now & will update you with how things went.
-
Hey there!
I think I have found what the problem is with your canonical link.
In your code you have:
<link rel="canonical" href="http://www.goldsboroughestates.co.uk/About/right-to-manage" /> (no trailing slash)
And you are probably forcing the URLs to have a / at the end somewhere.
So basically you are confusing browsers and search engine bots, because they now cannot tell which is the real version:
The search engine enters the page. It sees from the canonical that the right version should be the one WITHOUT a "/" at the end, but that page has a 301 redirect to the version which HAS a "/" at the end of the URL (and that version, in turn, has a canonical which points out that the preferred version should be the one without the "/"). So it is a never-ending circle.
So if you add a / to the end of the URL in your canonical tag, your problem should be solved.
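If you want to see the circle for yourself, here's a rough sketch of the check (Python, assuming the requests and beautifulsoup4 packages; the URL is the canonical target from this thread, and this is only an illustration, not anything the site actually runs):

```python
# Request the no-slash URL without following redirects, then read the
# canonical on the page it redirects to. If that canonical points back at
# the no-slash URL, the redirect and the canonical are chasing each other.
import requests
from bs4 import BeautifulSoup

no_slash = "http://www.goldsboroughestates.co.uk/About/right-to-manage"

first = requests.get(no_slash, allow_redirects=False, timeout=10)
print(no_slash, "->", first.status_code, first.headers.get("Location", ""))

if first.status_code in (301, 302) and "Location" in first.headers:
    slash_version = first.headers["Location"]  # may be relative on some servers
    page = requests.get(slash_version, timeout=10)
    tag = BeautifulSoup(page.text, "html.parser").find("link", rel="canonical")
    canonical = tag.get("href") if tag else None
    print(slash_version, "canonical ->", canonical)
    if canonical == no_slash:
        print("301 and canonical point at each other: never-ending circle.")
```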
Final thought: Screaming Frog is working well.
I hope this was a solution.
Cheers,
Istvan