Robots.txt - Do I block bots from crawling the non-www version if I use www.site.com?
-
My site is set up at http://www.site.com, and I redirect the non-www version to www in my .htaccess file. My question is: what should my robots.txt file look like for the non-www site? Do you block robots from crawling it like this? Or do you leave it blank?
User-agent: *
Disallow: /
Sitemap: http://www.morganlindsayphotography.com/sitemap.xml
Sitemap: http://www.morganlindsayphotography.com/video-sitemap.xml
-
Hi there
If the 301 redirect is configured properly, I wouldn't worry about this at all. Any request for the non-www version, including its robots.txt, will simply redirect to the www site, so there is no separate non-www robots.txt to serve.
Check your internal links and sitemap to make sure the URLs they list reflect the www version.
Beyond that, you're all good. There's no need to block the non-www version.
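For reference, the redirect itself usually looks something like this in .htaccess (a minimal mod_rewrite sketch; site.com is a placeholder, so swap in your own domain):
# Permanently (301) redirect any non-www request to the www version
RewriteEngine On
RewriteCond %{HTTP_HOST} ^site\.com$ [NC]
RewriteRule ^(.*)$ http://www.site.com/$1 [R=301,L]
With that in place, a request for http://site.com/robots.txt redirects to http://www.site.com/robots.txt, so only one robots.txt is ever served.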
Hope this helps! Good luck!
Related Questions
-
Block session id URLs with robots.txt
Hi, I would like to block all URLs with the parameter '?filter=' from being crawled by including them in robots.txt. Which directive should I use:
User-agent: *
Disallow: ?filter=
or
User-agent: *
Disallow: /?filter=
In other words, is the forward slash at the beginning of the disallow directive necessary? Thanks!
Intermediate & Advanced SEO | Mat_C
-
Using the same image across the site?
Hi, just wondering: I'm using the same image across 20 pages which are optimized for SEO purposes. Are there any issues with this from an SEO standpoint? Will Google devalue the pages because the same image is being used? Cheers.
Intermediate & Advanced SEO | seowork214
-
This url is not allowed for a Sitemap at this location error using pro-sitemaps.com
Hey, guys, We are using the pro-sitemaps.com tool to automate the sitemaps on our properties, but some of them give the error "This url is not allowed for a Sitemap at this location" for all of their URLs. Strangely, not all of the properties show the error, and most already have all their URLs indexed. Do you have any experience with the tool, and what is your opinion? Thanks
Intermediate & Advanced SEO | lgrozeva
-
Changing from .com to .com.au
Hi All, we are looking for some guidance, please, if at all possible. We have a .com domain (more than 10 years old) that we have been using for 2 years. We also have the .com.au version of the domain (2 years old), which points to the .com domain and isn't otherwise being used. We are an Australian-based company. Our question is: should we be using .com.au instead of .com, and if so, how would you advise going about the changeover without a huge negative SEO impact on our business? We are on the home page for most of the searches we have optimized for, but we are always below the .com.au's, which is why we are considering the move. Any advice would be GREATLY appreciated 🙂
Intermediate & Advanced SEO | creativeground
-
Switching site from non-www to www
Howdy folks, I've got a website that is roughly 3 months old. I created it on the naked URL as I often prefer the look, but I've noticed that a lot of my competition uses www, and some of my clients seem to prefer it as well. I feel like switching to www will be of long-term benefit for my site. The problem is that I currently have several pages with first-page rankings and backlinks. I am wondering what the negative effects of switching to www would be, and how I can minimize any issues. I am guessing I should do a redirect, and I have access to some of the backlinks so I can change those as well, but is there anything else? Thoughts? I appreciate the feedback!
Intermediate & Advanced SEO | jameswesleyhunt
-
Would you rate-control Googlebot? How much crawling is too much crawling?
One of our sites is very large - over 500M pages. Google has indexed 1/8th of the site, and they tend to crawl between 800k and 1M pages per day. A few times a year, Google will significantly increase their crawl rate, overnight hitting 2M pages per day or more. This creates big problems for us, because at 1M pages per day Google is consuming 70% of our API capacity, and the API overall is at 90% capacity. At 2M pages per day, 20% of our page requests are 500 errors. I've lobbied for an investment in / overhaul of the API configuration to allow for more Google bandwidth without compromising user experience. My tech team counters that it's a wasted investment, as Google will crawl to our capacity, whatever that capacity is. Questions for Enterprise SEOs:
- Is there any validity to the tech team's claim? I thought Google's crawl rate was based on a combination of PageRank and the frequency of page updates. This suggests there is some upper limit, which we perhaps haven't reached, but which would stabilize once reached.
- We've asked Google to rate-limit our crawl rate in the past. Is that harmful? I've always looked at a robust crawl rate as a good problem to have. Is 1.5M Googlebot API calls a day desirable, or something any reasonable Enterprise SEO would seek to throttle back?
- What about setting a longer refresh rate in the sitemaps? Would that reduce the daily crawl demand? We could increase it to a month, but at 500M pages Google could still have a ball at the 2M pages/day rate. Thanks
Intermediate & Advanced SEO | lzhao
-
Redirecting non www site
Hello Ladies and Gentlemen. I 100% agree with redirecting the non-www domain name. After all, we see so many times, especially in Moz, how the two different versions accumulate different links, different DA, and of course different PA. So I posed these questions to our IT company: "How would we go about redirecting our non-www domain to the www version?", "Where would we do that?", "We can't do the redirect on our webserver because the website is listed as an IP address, not a domain name, so would we do the redirect somewhere at GoDaddy?" (who currently maintains our DNS records). Here is the response from IT: "I would set up a CNAME record in DNS (GoDaddy), such that no matter if you go to the bare domain or the www, you end up in the same place. As for SEO, having a 301 redirect for your bare domain isn't necessary, because both the bare domain and the www are the same domain. 301 is a redirect for 'permanently moved' and is common when you change domain names. Using the bare domain or the www are NOT DIFFERENT DOMAINS, so the 301 would not be accurate, and you'd be telling engines you've moved, when you haven't - which may negatively impact your rank." It sounds to me that IT is NOT recommending the redirect. How can this be? Or are we talking about two different things? Will the redirect cause the meltdown the IT company suggests? Or do they not understand SEO?
Intermediate & Advanced SEO | Davenport-Tractor
-
Blocking Dynamic URLs with Robots.txt
Background: My e-commerce site uses a lot of layered navigation and sorting links. While this is great for users, it ends up with a lot of URL variations of the same page being crawled by Google. For example, a standard category page:
www.mysite.com/widgets.html
...which uses a "Price" layered navigation sidebar to filter products based on price also produces the following URLs, which all link to the same page:
http://www.mysite.com/widgets.html?price=1%2C250
http://www.mysite.com/widgets.html?price=2%2C250
http://www.mysite.com/widgets.html?price=3%2C250
There are literally thousands of these URL variations being indexed, so I'd like to use robots.txt to disallow them. Question: Is this a wise thing to do? Or does Google take layered navigation links into account by default, so I don't need to worry? To implement, I was going to do the following in robots.txt:
User-agent: *
Disallow: /*?
Disallow: /*=
...which would prevent any dynamic URL containing a '?' or '=' from being crawled. Is there a better way to do this, or is this a good solution? Thank you!
Intermediate & Advanced SEO | AndrewY