Moz Q&A is closed.
After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. While we're not removing the content entirely (many posts will remain viewable), we have locked both new posts and new replies. More details here.
What's your best hidden SEO secret?
- Work hard, play hard, and stay away from grey or black areas.
- Hah hah, I agree with Richard: the Moz community. And I'm also simpatico with Gianluca. Long walks outside in the fresh air do wonders for my creativity; ideas come easier.
- Giving a project a rest after a while. That pause benefits your creativity, because your brain keeps working on it in the background without stress. When you return to the project, it will seem new somehow, and the ideas your mind was breeding will come out with force.
- I have not been here long enough to have any 'hidden' secrets except SEOmoz. However, I do find that backlinks with great anchor text really, really work well.
- That black hat is a really poor long-term strategy.
- That one single hour each day when I get in the "zone" and do a week's worth of work in a single shot. Still figuring out how to extend that to the full 8 hours.
Related Questions
- Will disallowing URLs in the robots.txt file stop those URLs being indexed by Google?
  I found a lot of duplicate title tags showing in Google Webmaster Tools. When I visited the URLs that these duplicates belonged to, I found that they were just images from a gallery that we didn't particularly want Google to index. There is no benefit to the end user in these image pages being indexed in Google. Our developer has told us that these URLs are created by a module and are not "real" pages in the CMS. They would like to add the following to our robots.txt file:
  Disallow: /catalog/product/gallery/
  QUESTION: If these pages are already indexed by Google, will this adjustment to the robots.txt file help to remove the pages from the index? We don't want these pages to be found.
  Intermediate & Advanced SEO | andyheath0
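Worth noting: a Disallow rule blocks crawling, not indexing, so already-indexed URLs can linger. You can sanity-check which paths the proposed rule actually blocks with Python's standard-library robots.txt parser; a small sketch (the example.com URLs are illustrative):

```python
from urllib.robotparser import RobotFileParser

# Parse the proposed robots.txt rule in isolation
rules = [
    "User-agent: *",
    "Disallow: /catalog/product/gallery/",
]
rp = RobotFileParser()
rp.parse(rules)

# Gallery image pages are blocked from crawling...
blocked = rp.can_fetch("*", "https://example.com/catalog/product/gallery/img-1")
# ...while ordinary product pages remain crawlable.
allowed = rp.can_fetch("*", "https://example.com/catalog/product/widget")
```

Because crawling of the gallery URLs is blocked, Google may never see a noindex tag placed on them, which is why robots.txt alone is a slow way to remove already-indexed pages.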
- What's the best possible URL structure for a local search engine?
  Hi Mozzers, I'm working at AskMe.com, which is a local search engine in India, i.e. if you're standing somewhere and looking for the pizza joints nearby, we pick up your current location and share the list of pizza outlets nearby along with ratings, reviews, etc. about these outlets. Right now, our URL structure looks like www.askme.com/delhi/pizza-outlets for the city-specific category pages (here, "Delhi" is the city name and "Pizza Outlets" is the category) and www.askme.com/delhi/pizza-outlets/in/saket for a category page in a particular area (here, "Saket") in a city. The URL looks a little different if you're searching for something which is not a category (or not mapped to a category, in which case we 301 redirect you to the category page): it looks like www.askme.com/delhi/search/pizza-huts/in/saket if you're searching for pizza huts in Saket, Delhi, as "pizza huts" is neither a category nor is it mapped to any category. We're also dealing in ads and deals, along with our very own e-commerce brand AskMeBazaar.com, to provide a better user experience and a one-stop shop for our customers. Now we're working on a URL restructure project, and my question to you all SEO rockstars is: what is the best possible URL structure we can have? Assume we have kick-ass developers who can manage any given URL structure at the backend.
  Intermediate & Advanced SEO | _nitman0
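For reference, the flat `/<city>/<category>[/in/<area>]` pattern described in the question can be generated with a tiny slug helper. A sketch only — the domain and slug rules come from the examples above; the function itself is hypothetical:

```python
def listing_url(city, category, area=None):
    """Build a keyword-rich listing path: /<city>/<category>[/in/<area>]."""
    def slug(s):
        # Lowercase and hyphenate, matching the URLs in the question
        return s.lower().strip().replace(" ", "-")

    parts = [slug(city), slug(category)]
    if area:
        parts += ["in", slug(area)]
    return "https://www.askme.com/" + "/".join(parts)
```

A flat structure like this keeps every listing page at most two or three directories deep, which tends to be easier to crawl than deeply nested hierarchies.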
- Why is /home used in this company's home URL?
  Just working with a company that has chosen a home URL with /home latched on, which is very strange indeed. Has anybody else come across this kind of homepage URL "decision" in the past? I can't see why on earth anybody would do this! Perhaps simply a logic-defying decision?
  Intermediate & Advanced SEO | McTaggart0
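If the /home variant ever has to go, the usual fix is a permanent redirect from /home back to the root so any accumulated link equity consolidates there. A minimal Apache .htaccess sketch, assuming mod_rewrite is available:

```apache
RewriteEngine On
# Permanently redirect /home and /home/ to the site root
RewriteRule ^home/?$ / [R=301,L]
```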
- Help: forum (user-generated content) SEO best practices
  Hello Moz folks! For the very first time I'm dealing with a massive community that relies on UGC (user-generated content). Their forum has a great deal of duplicate content, broken links, duplicate titles, and other on-site issues. I have advanced SEO knowledge related to e-commerce and blogging, but I am new to forums and UGC. I would really love to learn about, or get links to resources on, the best practices for forum SEO. Any help is greatly appreciated. Best, Yan
  Intermediate & Advanced SEO | ydesjardins2000
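For the duplicate-title and thin-content problems described above, forums commonly combine canonical tags with selective noindexing. A sketch of the typical markup (the forum URL is hypothetical):

```html
<!-- On duplicate views of a thread (print view, sort orders, ?page=1 vs the
     bare URL), point search engines at one canonical address -->
<link rel="canonical" href="https://forum.example.com/topic/1234-thread-slug/">

<!-- On thin or empty threads, keep the page out of the index
     while still letting crawlers follow its links -->
<meta name="robots" content="noindex, follow">
```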
- Remove URLs that 301 redirect from Google's index
  I'm working with a client who has 301 redirected thousands of URLs from their primary subdomain to a new subdomain (these are unimportant pages with regard to link equity). These URLs are still appearing in Google's results under the primary domain, rather than the new subdomain. This is problematic because it's creating an artificial index-bloat issue: these URLs make up over 90% of the URLs indexed. My experience has been that URLs that have been 301 redirected are removed from the index over time and replaced by the new destination URL. But it has been several months, close to a year even, and they're still in the index. Any recommendations on how to speed up the process of removing the 301-redirected URLs from Google's index? Will Google, or any search engine for that matter, process a noindex meta tag if the URL has been redirected?
  Intermediate & Advanced SEO | trung.ngo0
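One common way to nudge recrawling of the old URLs is to submit a temporary XML sitemap that lists them, so crawlers revisit each one, see the 301, and drop it sooner. (A noindex tag on a redirected URL is generally moot, since the crawler follows the redirect rather than rendering the old page.) A minimal generator sketch — the URLs are placeholders:

```python
from xml.sax.saxutils import escape

def redirect_sitemap(old_urls):
    """Temporary sitemap of already-redirected URLs to encourage recrawling."""
    entries = "\n".join(
        f"  <url><loc>{escape(u)}</loc></url>" for u in old_urls
    )
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
        f"{entries}\n"
        "</urlset>"
    )
```

The sitemap can be removed once the old URLs have dropped out of the index.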
- May know what's the meaning of these parameters in .htaccess?
  Intermediate & Advanced SEO | esiow2013
 # Begin HackRepair.com Blacklist
 RewriteEngine on
 # Abuse Agent Blocking
 RewriteCond %{HTTP_USER_AGENT} ^BlackWidow [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Bolt\ 0 [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Bot\ mailto:[email protected] [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} CazoodleBot [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^ChinaClaw [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Custo [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Default\ Browser\ 0 [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^DIIbot [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^DISCo [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} discobot [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Download\ Demon [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^eCatch [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ecxi [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^EirGrabber [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^EmailCollector [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^EmailSiphon [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^EmailWolf [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Express\ WebPictures [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^ExtractorPro [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^EyeNetIE [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^FlashGet [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^GetRight [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^GetWeb! [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Go!Zilla [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Go-Ahead-Got-It [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^GrabNet [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Grafula [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} GT::WWW [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} heritrix [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^HMView [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} HTTP::Lite [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} HTTrack [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ia_archiver [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} IDBot [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} id-search [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} id-search.org [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Image\ Stripper [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Image\ Sucker [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} Indy\ Library [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^InterGET [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Internet\ Ninja [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^InternetSeer.com [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} IRLbot [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ISC\ Systems\ iRc\ Search\ 2.1 [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Java [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^JetCar [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^JOC\ Web\ Spider [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^larbin [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^LeechFTP [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} libwww [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} libwww-perl [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Link [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} LinksManager.com_bot [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} linkwalker [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} lwp-trivial [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Mass\ Downloader [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Maxthon$ [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} MFC_Tear_Sample [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^microsoft.url [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} Microsoft\ URL\ Control [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^MIDown\ tool [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Mister\ PiX [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} Missigua\ Locator [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Mozilla.*Indy [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Mozilla.NEWT [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^MSFrontPage [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Navroad [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^NearSite [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^NetAnts [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^NetSpider [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Net\ Vampire [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^NetZIP [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Nutch [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Octopus [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Offline\ Explorer [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Offline\ Navigator [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^PageGrabber [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} panscient.com [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Papa\ Foto [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^pavuk [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} PECL::HTTP [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^PeoplePal [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^pcBrowser [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} PHPCrawl [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} PleaseCrawl [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^psbot [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^RealDownload [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^ReGet [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Rippers\ 0 [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} SBIder [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^SeaMonkey$ [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^sitecheck.internetseer.com [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^SiteSnagger [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^SmartDownload [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} Snoopy [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} Steeler [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^SuperBot [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^SuperHTTP [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Surfbot [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^tAkeOut [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Teleport\ Pro [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Toata\ dragostea\ mea\ pentru\ diavola [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} URI::Fetch [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} urllib [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} User-Agent [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^VoidEYE [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Web\ Image\ Collector [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Web\ Sucker [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} Web\ Sucker [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} webalta [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^WebAuto [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^[Ww]eb[Bb]andit [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} WebCollage [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^WebCopier [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^WebFetch [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^WebGo\ IS [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^WebLeacher [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^WebReaper [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^WebSauger [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Website\ eXtractor [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Website\ Quester [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^WebStripper [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^WebWhacker [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^WebZIP [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} Wells\ Search\ II [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} WEP\ Search [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Wget [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Widow [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^WWW-Mechanize [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^WWWOFFLE [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Xaldon\ WebSpider [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} zermelo [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Zeus [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^(.)Zeus.Webster [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ZyBorg [NC]
 RewriteRule ^. - [F,L]
 # Abuse bot blocking rule end
 # End HackRepair.com Blacklist
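To answer the question itself: each RewriteCond tests the incoming User-Agent header against a regular expression (a leading ^ anchors the match to the start of the string), and the bracketed flags control how the conditions combine. A minimal annotated version of the same pattern, with placeholder bot names:

```apache
RewriteEngine on
# [NC] = case-insensitive match ("No Case")
# [OR] = this condition ORs with the next one (default is AND)
RewriteCond %{HTTP_USER_AGENT} ^SomeBadBot [NC,OR]
RewriteCond %{HTTP_USER_AGENT} AnotherBadBot [NC]
# If any condition matched: [F] = respond 403 Forbidden, [L] = last rule, stop rewriting
RewriteRule ^.* - [F,L]
```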
- eCommerce product listed in multiple places: best SEO practice?
  We have an eCommerce site we have built for a customer, and products are allowed to appear in more than one product category within the web site. Now, I know this is a bad idea from a duplicate-content point of view, but we are going to allow the customer to select which of the multiple categories a product appears in will be its default category. This will mean we have a way of defining the default URL for a product. So am I correct in thinking that, on all the other URLs where the product appears, we should add a rel=canonical pointing to the default URL to stop duplicate content? Is this the best way?
  Intermediate & Advanced SEO | spiralsites0
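For reference, the rel=canonical approach described above looks like this on each non-default category URL; a sketch with hypothetical paths:

```html
<!-- Served on both /garden/hose-reel and /outdoor/hose-reel, pointing at the
     customer-selected default category URL for the product -->
<link rel="canonical" href="https://www.example.com/watering/hose-reel">
```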
- Does Google crawl the pages which are generated via the site's search box queries?
  For example, if I search for an 'x' item in a site's search box and the site displays a list of results based on the query, would that page be crawled? I am asking this question because this would be a URL that does not otherwise exist on the site, and hence I am confused as to whether Google bots would be able to find it.
  Intermediate & Advanced SEO | pulseseo0
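Crawlers can only reach such search-result URLs if they are linked from somewhere or submitted in a sitemap, and many sites keep them out of crawling entirely with a robots.txt rule. A sketch, assuming the internal search results live under a /search/ path:

```
User-agent: *
Disallow: /search/
```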