Canonical issues using Screaming Frog and other tools?
-
In the Directives tab within Screaming Frog, can anyone tell me what the difference between "canonicalised", "canonical", and "no canonical" means? They're found in the filter box. I see the data but am not sure how to interpret them. Which one of these would I check to find canonical issues within a website? Are there any other easy ways to identify canonical issues?
-
Hello
I spotted this thread and was just about to reply, but Dirk has answered it all perfectly. Thanks Dirk!
Under 'Reports' there's also a 'Canonical Errors' report, which will show canonicals with various technical issues: those that are blocked by robots.txt, have no response, or return a 3XX redirect or a 4XX or 5XX error (essentially anything other than a 200 'OK' response). It will also show any URLs discovered only via a canonical that are not linked to internally from the site's own link structure (the 'Unlinked' column will show 'true').
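If you want to sanity-check canonical targets outside the crawler, the same kind of test takes only a few lines. A minimal Python sketch using the requests library (the canonical URLs are placeholders; in practice you'd export them from the crawl):

```python
# Request each canonical target and flag anything other than a 200 'OK',
# mirroring what the 'canonical errors' report checks for.
# The URL list below is hypothetical.
import requests

canonical_urls = [
    "https://example.com/page-a/",
    "https://example.com/old-page/",
]

for url in canonical_urls:
    try:
        # Don't follow redirects, so a 3XX canonical target shows up as such.
        response = requests.get(url, allow_redirects=False, timeout=10)
        if response.status_code != 200:
            print(f"Canonical error: {url} returned {response.status_code}")
    except requests.RequestException as exc:
        print(f"Canonical error: {url} gave no response ({exc})")
```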
Hope that helps anyway.
Cheers!
Dan
-
Hi,
The difference between them:
-
canonical: the URL has a canonical URL, which can be self-referencing (canonical URL = URL) or not
-
canonicalised: the URL has a canonical URL which is not self-referencing (canonical URL ≠ URL)
-
no canonical: quite obvious - the URL has no canonical URL.
Potential issues could be: URLs that you would like to have a canonical don't have one, or URLs that are canonicalised don't point to the right canonical URL. You can use the lists (both 'canonicalised' and 'no canonical') from Screaming Frog to check them, but it's up to you to judge whether each canonical is OK or not (no automated tool can guess what your intentions are).
Typical mistakes with canonicals: all URLs have the same canonical URL (like the homepage), or have canonical URLs that do not exist. You could also check this with Screaming Frog using the setting "respect canonicals"; this way only the canonical URLs will be shown in the listing. Also keep in mind that canonical URLs are merely a friendly request to Google to index the canonical rather than the normal URL; Google is under no obligation to comply (see https://support.google.com/webmasters/answer/139066?hl=en, quote: "the search results will be more likely to show users that URL structure. (Note: We attempt to respect this, but cannot guarantee this in all cases.)").
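To make the three filters concrete, here is a rough Python sketch of the same classification, using the requests and beautifulsoup4 libraries (the page URLs are placeholders):

```python
# Classify pages the way the Directives filters do: canonical
# (self-referencing), canonicalised (points elsewhere), or no canonical.
# The page URLs below are hypothetical examples.
import requests
from bs4 import BeautifulSoup

pages = [
    "https://example.com/",
    "https://example.com/products/?sort=price",
]

for url in pages:
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    link = soup.find("link", rel="canonical")
    if link is None or not link.get("href"):
        print(f"{url}: no canonical")
    elif link["href"] == url:
        print(f"{url}: canonical (self-referencing)")
    else:
        print(f"{url}: canonicalised -> {link['href']}")
```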
Dirk
-
Related Questions
-
Discovered - currently not indexed issue
Hello all, We have a sitemap with URLs that have mostly user-generated content - a Profile Overview section where users write about their services and some other things. Out of 46K URLs, only 14K are valid according to Search Console, and 32K URLs are excluded. Of those 32K, 28K are "Discovered - currently not indexed". We can't really update these pages, as they have user-generated content; however, we do want to leverage all these pages to help us in our SEO. So the question is: how do we make all of these pages indexable? If anyone can help in this regard, please let me know. Thanks!
Technical SEO | akashkandari
-
If I'm using a compressed sitemap (sitemap.xml.gz), that's the URL that gets submitted to webmaster tools, correct?
I just want to verify that if a compressed sitemap file is being used, then the URL that gets submitted to Google, Bing, etc., and the URL used in robots.txt should indicate that it's a compressed file - for example, "sitemap.xml.gz". Thanks!
Technical SEO | jgresalfi
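For reference, producing the compressed file and pointing search engines at it is straightforward. A minimal Python sketch using only the standard library (the file names are placeholders):

```python
# Compress an existing sitemap.xml into sitemap.xml.gz.
import gzip
import shutil

with open("sitemap.xml", "rb") as src, gzip.open("sitemap.xml.gz", "wb") as dst:
    shutil.copyfileobj(src, dst)

# The URL you submit and the robots.txt entry would then both reference
# the compressed file, e.g.:
#   Sitemap: https://example.com/sitemap.xml.gz
```
-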
Set Canonical for Paginated Content
Hi Guys, This is a follow-up on this thread: http://a-moz.groupbuyseo.org/community/q/dynamic-url-parameters-woocommerce-create-404-errors# I would like to know how I can set a canonical link in WordPress/WooCommerce which points to "View All" on category pages in our webshop. The categories on my website can be viewed as 24/48 or All products, but because the quantity constantly changes, viewing 24 or 48 products isn't always possible. To point Google in the right direction, I want to let them know that "View All" is the best way to go. I've read that Google's crawler tries to do this automatically, but I'm not sure if this is the case on my website. Here is some more info on the issue: https://support.google.com/webmasters/answer/1663744?hl=en Thanks for the help! Joost
Technical SEO | jeeyer
-
Canonical Tag when using Ajax and PhantomJS
Hello, We have a site that is built using AJAX. We include the meta fragment tag in order to serve a rendered page from PhantomJS. The URL that is rendered to Google from PhantomJS is then www.oursite.com/?escaped_fragment= In the SERPs, Google of course doesn't include the hash fragment in the URL. So my question: with this setup, do I still need a canonical tag, and if I do, should the canonical tag be the escaped-fragment URL or the regular URL? Much appreciated!
Technical SEO | RevanaDigitalSEO
-
Is there a tool to see all redirects?
I'm thinking this is a silly question, but I've never had to deal with it, so I thought I'd ask. OK: is there a tool out there that will show all the redirects to a domain? I'm working on a project where I keep stumbling on URLs that redirect to the site I'm studying. They don't show up in Open Site Explorer or Ahrefs as linking domains, but they keep popping up on me. Any thoughts?
Technical SEO | BCutrer
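There's no single tool that lists every URL redirecting to a domain, but if you already have candidate URLs, you can trace each chain yourself. A minimal Python sketch using the requests library (the starting URL is a placeholder):

```python
# Follow a URL and print every hop in its redirect chain.
import requests

response = requests.get("http://example.com/old-page", timeout=10)
for hop in response.history:  # the intermediate 3XX responses, in order
    print(f"{hop.status_code}  {hop.url}")
print(f"{response.status_code}  {response.url}  (final)")
```
-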
Exclude status codes in Screaming Frog
I have a very large ecommerce site I'm trying to spider using Screaming Frog. The problem is that the crawl keeps hanging, even though I have turned off the high memory safeguard under Configuration. The site has approximately 190,000 pages according to the results of a Google site: command. The site architecture is almost completely flat. Limiting the search by depth is a possibility, but it will take quite a bit of manual labor, as there are literally hundreds of directories one level below the root. There are many, many duplicate pages. I've been able to exclude some of them from being crawled using the exclude configuration parameters. There are thousands of redirects. I haven't been able to exclude those from the spider because they don't have a distinguishing character string in their URLs. Does anyone know how to exclude files using status codes? I know that would help. If it helps, the site is kodylighting.com. Thanks in advance for any guidance you can provide.
Technical SEO | DonnaDuncan
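Screaming Frog's exclude filter matches URL patterns rather than status codes, so one workaround is to crawl, export, and filter the redirects out afterwards. A minimal Python sketch (the file name and the 'Address' and 'Status Code' column headers are assumptions based on a typical export):

```python
# Drop 3XX rows from an exported crawl CSV so only non-redirect URLs remain.
import csv

with open("internal_all.csv", newline="", encoding="utf-8") as f:
    rows = [row for row in csv.DictReader(f)
            if not row["Status Code"].startswith("3")]

for row in rows:
    print(row["Address"], row["Status Code"])
```
-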
Use of Meta Tag - MSSmartTagsPreventParsing
We've inherited some sites from another developer that include the MSSmartTagsPreventParsing meta tag. All references I can find to it are from 2004. What is its purpose, and is it worth including in pages/sites we build?
Technical SEO | wcksmith
-
Duplicate Content issue
I have been asked to review an old website to identify opportunities for increasing search engine traffic. Whilst reviewing the site I came across a strange loop. On each page there is a link to a printer-friendly version: http://www.websitename.co.uk/index.php?pageid=7&printfriendly=yes That page also has a link to a printer-friendly version: http://www.websitename.co.uk/index.php?pageid=7&printfriendly=yes&printfriendly=yes and so on, indefinitely. Some of these pages are being included in Google's index. I appreciate that this can't be a good thing; however, I am not 100% sure of the extent to which it is a bad thing, or the priority that should be given to getting it sorted. Just wondering what views people have on the issues this may cause?
Technical SEO | CPLDistribution
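As an illustration, each repeated parameter is just noise: the variants collapse back to a single normalised URL, which is what a canonical (or a fixed link) should point at. A minimal Python sketch using the standard library (the URL is taken from the question):

```python
# Collapse repeated query parameters so every printer-friendly variant
# maps back to one URL.
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

url = ("http://www.websitename.co.uk/index.php"
       "?pageid=7&printfriendly=yes&printfriendly=yes")
parts = urlsplit(url)
params = dict(parse_qsl(parts.query))  # dict() keeps one value per name
normalised = urlunsplit(parts._replace(query=urlencode(params)))
print(normalised)
# http://www.websitename.co.uk/index.php?pageid=7&printfriendly=yes
```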