Subdomain 403 error
-
Hi Everyone,
A crawler from our SEO tool detects a 403 error on a link from our main domain to a couple of subdomains. However, these subdomains are perfectly accessible. What could be the problem? Is this error caused by the server, the crawlbot, or something else?
I would love to hear your thoughts.
Jens -
Not at all.
-
Hi Roman,
Thanks for your answer!
It's a commercial tool.
I checked the robots.txt file and .htaccess, but didn't see any problems.
As you say, the problem may just be caused by the user-agent. If so, this won't affect my SEO efforts, right?
-
Which tool are you using? Is it a custom tool, or a commercial tool such as Screaming Frog?
-
These are client errors: any 4xx status means something about the request was refused or could not be fulfilled. Whatever is happening, the issue is typically on the client side, or in how the server treats that particular client:
403 means "Forbidden": the server understood the request but refuses to authorize it. So in your case, the first place to check is your .htaccess and your robots.txt file; make sure they are not blocking any crawler, or at least not the crawler of your tool. (A robots.txt Disallow does not itself return a 403, since that response comes from a server-side rule, but well-behaved crawlers will still skip disallowed URLs.)
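As an illustration, a server-side user-agent block in .htaccess often looks like the hypothetical mod_rewrite fragment below (the bot names here are placeholders, not rules from your server); look for anything similar in your own file:

```
# Hypothetical .htaccess fragment: the bot names are placeholders.
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} (rogerbot|AhrefsBot|SemrushBot) [NC]
# The [F] flag answers every matching request with "403 Forbidden".
RewriteRule .* - [F]
```

If you find a rule like this, either remove it or take your SEO tool's user-agent out of the pattern.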
For example, some hosting providers block all crawlers other than Google or Bing to save resources, so it is common for Roger (the Moz crawler) to have trouble crawling a page that is blocked on the server side. Moz, Ahrefs, and Semrush crawlers all run into this kind of problem. In summary:
- Make sure your .htaccess and your robots.txt are not blocking your crawler
- Make sure your hosting is not blocking your crawler
- If neither of the above turns anything up, try changing the user-agent of your tool; the sketch below is a quick way to test whether the user-agent is what triggers the 403
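To check whether the 403 depends on the user-agent, request the same URL with a few different User-Agent headers and compare the status codes. A minimal sketch, assuming Python with the `requests` package installed (the URL and the user-agent strings are placeholders):

```python
import requests

URL = "https://subdomain.example.com/"  # placeholder: your affected subdomain

# A browser-like user-agent alongside a couple of crawler-style ones.
USER_AGENTS = {
    "browser": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "moz crawler": "rogerbot",
    "generic bot": "MyCrawler/1.0",
}

for name, ua in USER_AGENTS.items():
    resp = requests.get(URL, headers={"User-Agent": ua}, timeout=10)
    print(f"{name:12s} -> HTTP {resp.status_code}")
```

If the browser-style string gets a 200 while the bot-style strings get a 403, the server is blocking by user-agent. That has no effect on your Google rankings as long as Googlebot itself is not blocked.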
Hope this info helps you with your problem
Related Questions
-
Moving E-Commerce Store to Subdomain?
Hi all, We have a customer who currently uses Square for their in-store point-of-sale system as well as for their e-commerce website. From my understanding, a Square site is a watered-down version of Weebly, and is proving to be highly restrictive from an SEO and content structuring standpoint. It's been an uphill battle to try and get traction for their site in SERPs. Would it be a bad idea to move the entire Square online store to a subdomain, and install WordPress on the root domain? This way their online store would remain as-is, but the primary pages on the site would be on WordPress which would give us a lot more control over the content. I just want to make sure this doesn't negatively impact their SEO. Thanks!
Technical SEO | suarezventures
-
Subdomain or subfolder?
Hello, we are working on a new site. The idea is to have an ecommerce shop, but the homepage will be a content page, basically a blog page. My developer wants to have the blog (home) page on a subdomain, blog.example.com, because it will be easier to make a nice content page that way, and the rest of the site will just be on the root domain (example.com). I'm just worried that this will be bad for our SEO efforts. I've always thought it was better to use a subfolder rather than a subdomain. If we get links to the content on the subdomain, will the link juice flow to the shop on the root domain? What are your thoughts?
Technical SEO | pinder325
-
Removed Subdomain Sites Still in Google Index
Hey guys, I've got kind of a strange situation going on and I can't seem to find it addressed anywhere. I have a site that at one point had several development sites set up at subdomains. Those sites have since launched on their own domains, but the subdomain sites are still showing up in the Google index. However, if you look at the cached version of pages on these non-existent subdomains, it lists the NEW url, not the dev one in the little blurb that says "This is Google's cached version of www.correcturl.com." Clearly Google recognizes that the content resides at the new location, so how come the old pages are still in the index? Attempting to visit one of them gives a "Server Not Found" error, so they are definitely gone. This is happening to a couple of sites, one that was launched over a year ago so it doesn't appear to be a "wait and see" solution. Any suggestions would be a huge help. Thanks!!
Technical SEO | SarahLK
-
Some URLs were not accessible to Googlebot due to an HTTP status error.
Hello, I'm an SEO newbie and some help from the community here would be greatly appreciated. I submitted the sitemap of my website in Google Webmaster Tools and got this warning: "When we tested a sample of the URLs from your Sitemap, we found that some URLs were not accessible to Googlebot due to an HTTP status error. All accessible URLs will still be submitted." How do I fix this? What should I do? Many thanks in advance.
Technical SEO | GoldenRanking14
-
Best strategy to handle over 100,000 404 errors.
I was recently given a site that has over one hundred thousand 404 error codes listed in Google Webmaster Tools. It is really odd because, according to Google Webmaster Tools, the pages that link to these 404 pages are also pages that no longer exist (they are 404 pages themselves). These errors were the result of a site migration. I'd appreciate any input on how one might go about auditing and repairing large numbers of 404 errors. Thank you.
Technical SEO | SEO_Promenade
-
Should we move our documentation off subdomain?
Background: We have a popular open source e-commerce platform at http://spreecommerce.com. Right now the documentation is on http://guides.spreecommerce.com. We have "edge" documentation (for stuff that's not yet released) on http://edgeguides.spreecommerce.com, but since it's largely duplicative we've told Google not to index any of the edge stuff (via robots.txt). Question: Should we consider moving the guides under the main website, under /docs or something like this? There's a ton of great content that people often read to learn more about the platform. It seems like we might be diluting our juice a bit by having it on a separate domain. WDYT?
Technical SEO | schof
-
Best geotargeting strategy: Subdomains or subfolders or country specific domain
How have the relatively recent changes in how Google perceives subdomains changed the best route to onsite geotargeting, i.e. not building out new country-specific sites on country-specific, locally hosted domains, and instead developing subdomains or subfolders and geotargeting those via Webmaster Tools? In other words, given the recent change in Google's perception, are subdomains now a better option than subfolders, or is there not much in it? Also, if a client has a .co.uk and they want to geotarget, say, France, is the subdomain/subfolder route still an option, or is the .co.uk still too UK-specific, so these options would only work using a .com? In other words, can sites on country-specific domains (.co.uk, .fr, .de, etc.) use subfolders or subdomains to geotarget other countries, or do they have no option other than to develop new country-specific (domain/hosting/language) websites? Any thoughts regarding current best practice in this regard are much appreciated. I have seen last Feb's WBF, which covers geotargeting in depth, but the way Google perceives subdomains has changed since then. Many thanks, Dan
Technical SEO | Dan-Lawrence
-
Subdomain Removal in Robots.txt with Conditional Logic??
I would like to see if there is a way to add conditional logic to the robots.txt file, so that when we push from DEV to PRODUCTION and the robots.txt file is pushed, we don't have to remember NOT to push the robots.txt file, or edit it when it goes live. My specific situation is this: I have www.website.com, dev.website.com, and new.website.com, and somehow Google has indexed DEV.website.com and NEW.website.com. I'd like these to be removed from Google's index, as they are causing duplicate content. Should I: a) add two new GWT entries for DEV.website.com and NEW.website.com and VERIFY ownership? If I do this, then when the files are pushed to LIVE, won't the files contain the VERIFY META CODE for the DEV version even though it's now LIVE? (Hope that makes sense.) b) write a robots.txt file that specifies "DISALLOW: DEV.website.com/"? Is that possible? I have only seen examples of DISALLOW with a "/" at the beginning... Hope this makes sense; I can really use the help! I'm on a Windows Server 2008 box running ColdFusion websites.
Technical SEO | ErnieB
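On option b in the question above: robots.txt directives apply only to the host that serves the file, and Disallow takes URL paths rather than hostnames, which is why examples always start with "/". The usual approach is to serve a separate, fully-blocking robots.txt from each dev subdomain. A minimal sketch of such a file, assuming dev.website.com can serve its own copy (hypothetical content, not from the asker's setup):

```
# Served only at http://dev.website.com/robots.txt
# "Disallow: /" blocks every path on this host; it has no effect on www.
User-agent: *
Disallow: /
```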