Moz Q&A is closed.
After more than 13 years, and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content - many posts will still be viewable - we have locked both new posts and new replies. More details here.
What does Disallow: /french-wines/?* actually do - robots.txt
Hello Mozzers - Just wondering what this robots.txt instruction means: Disallow: /french-wines/?* Does it stop Googlebot crawling and indexing URLs in that "French Wines" folder - specifically the URLs that include a question mark? Would it also stop the crawling of deeper folders - e.g. /french-wines/rhone-region/ - that include a question mark in their URL? I think this has been done to block URLs containing query strings. Thanks, Luke
 Glad to help, Luke! 
 Thanks Logan for your help with this - much appreciated. Really helpful! 
Disallow: /?* is the same thing as Disallow: /?. Since the asterisk is a wildcard and rules already match by prefix, both of those disallows prevent any URL that begins with /? from being crawled. And yes, it is incredibly easy to disallow the wrong thing! The robots.txt tester in Search Console (under the Crawl menu) is very helpful for figuring out what a disallow will catch and what it will let by. I highly recommend testing any new disallows there before releasing them into the wild.
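The point that the trailing asterisk is redundant can be checked with a few lines of code. Below is a minimal sketch of Google-style robots.txt matching (a rule matches any path that starts with the pattern, with * as a wildcard); `robots_match` is a helper name invented for illustration, not a real library function, and it deliberately ignores the full protocol (rule precedence, `$` anchors, per-agent groups):

```python
import re

def robots_match(pattern, path):
    # Sketch of Google-style matching: '*' becomes '.*',
    # and a rule matches any path that begins with the pattern.
    regex = re.escape(pattern).replace(r"\*", ".*")
    return re.match(regex, path) is not None

# The trailing '*' adds nothing: both rules match exactly the
# same set of paths (anything that starts with "/?").
for url in ["/?page=2", "/?", "/", "/about", "/french-wines/?color=red"]:
    assert robots_match("/?*", url) == robots_match("/?", url)
```

Note that Python's built-in `urllib.robotparser` follows the original prefix-only convention and does not understand Google's `*` wildcard, which is why a hand-rolled matcher is used here instead.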
Thanks again Logan. What would Disallow: /?* do? That's what the site I'm looking at has implemented. Perhaps it works both ways around? I imagine it's easy to disallow the wrong thing, or possibly not disallow the right thing. Ugh.
Disallow: /*? literally says to crawlers: "if a URL starts with a slash (all URLs) and contains a question mark (i.e. a parameter), don't crawl it". The * is a wildcard that matches anything between the / and the ?. It's very easy to disallow the wrong thing, especially where parameters are concerned, so I always do these two things rather than using robots.txt:
- Set the purpose of each parameter in Search Console - go to Crawl > URL Parameters to configure this for your site
- Use canonical tags - most people disallow URLs with parameters in robots.txt to prevent indexing, but this only prevents crawling. A canonical tag on the parameterized URL pointing to its parameter-free version will prevent indexing of URLs with parameters.
Hope that's helpful!
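To see what Disallow: /*? actually catches, here is the same kind of rough sketch of Google-style wildcard matching (`robots_match` is an illustrative helper, not part of any real crawler or library):

```python
import re

def robots_match(pattern, path):
    # Rough model of wildcard matching: '*' becomes '.*',
    # and rules match from the start of the path.
    regex = re.escape(pattern).replace(r"\*", ".*")
    return re.match(regex, path) is not None

# "/*?" matches any path containing a '?', i.e. any URL with a
# query string, at any folder depth.
print(robots_match("/*?", "/french-wines/?color=red"))    # True
print(robots_match("/*?", "/rhone-region/page?sort=asc")) # True
print(robots_match("/*?", "/french-wines/"))              # False
```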
Thanks Logan - I was just reading: "Disallow: /*? # block any URL that includes a ? (and thus a query string)" - do you know why the * comes before the ? in this case?
Hi Luke, You are correct that this was done to block URLs with parameters. However, since there's no wildcard (asterisk) before the folder name, the URL has to start with /french-wines/. This disallow is really only preventing crawling of the single URL www.yoursite.com/french-wines/ with any parameters appended.
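This behaviour can be demonstrated with a small sketch of Google-style rule matching (a rule matches any path that starts with it, with * as a wildcard; `robots_match` is a name made up for illustration): the ? must appear immediately after /french-wines/, so deeper folders with query strings are not caught.

```python
import re

def robots_match(pattern, path):
    # '*' is a wildcard; otherwise a rule matches any path
    # that starts with the pattern.
    regex = re.escape(pattern).replace(r"\*", ".*")
    return re.match(regex, path) is not None

rule = "/french-wines/?*"
print(robots_match(rule, "/french-wines/?color=red"))         # True: '?' directly after the folder
print(robots_match(rule, "/french-wines/rhone-region/?x=1"))  # False: deeper folder, '?' comes later
print(robots_match(rule, "/french-wines/"))                   # False: no query string at all
```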