Moz Q&A is closed.
After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. While we're not completely removing the content (many posts will remain viewable), we have locked both new posts and new replies.
How to prevent 404s from a job board?
- I have a new client with a job listing board on their site. I am getting a bunch of 404 errors as they delete the filled jobs. Question: should we leave the job pages up for extra content and entry points to the site, with a notice like "This job has been filled, please search our other job listings"? Or should I noindex/nofollow these pages? Or any other suggestions? It is an employment agency site. Overall, what would be the best practice going forward? We are looking at roughly 20 jobs/pages per month.
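For reference, the noindex option mentioned above can be applied without editing each page, via an X-Robots-Tag response header in .htaccess. A minimal sketch, assuming mod_setenvif and mod_headers are enabled and that filled jobs live under a hypothetical /jobs/filled/ path:

 # Flag requests for filled-job pages (the /jobs/filled/ path is a
 # hypothetical placeholder; adjust to the site's real URL pattern).
 SetEnvIf Request_URI "^/jobs/filled/" FILLED_JOB
 # Tell crawlers not to index these pages, but still follow their links.
 Header set X-Robots-Tag "noindex, follow" env=FILLED_JOB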
- Thank you, Brad. There are a few options, and before I choose I wanted to get some feedback from SEO pros like yourself, as you never know when I may learn something new or find a better solution to the things I do. I definitely do not want all those 404s on the site.
- From my understanding it's pretty rare to keep old jobs on an employment agency site. Can't you just redirect each URL to a similar job, a job category, or another relevant section of the site? Better than getting a heap of 404s!
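A minimal sketch of that redirect approach, assuming mod_alias and purely hypothetical URLs, where each filled job gets a 301 to its category page:

 # 301 an individual filled-job URL to its category listing
 # (both paths are hypothetical placeholders).
 Redirect 301 /jobs/senior-accountant-4821 /jobs/accounting/
 # Or catch a whole pattern of expired postings with a regex.
 RedirectMatch 301 ^/jobs/expired/.* /jobs/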
Related Questions
- Over-optimizing Internal Linking: Is this real and, if so, what's the happy medium?
 I have heard a lot about having a solid internal linking structure so that Google can easily discover pages, understand your page hierarchies and correlations, and pass equity. Often it's mentioned that it's good to have optimized anchor text, but not too optimized. You hear a lot of warnings about how over-optimization can be perceived as spammy: https://neilpatel.com/blog/avoid-over-optimizing/ But you also see posts and news saying that the internal link over-optimization warnings are unfounded or outdated: https://www.seroundtable.com/google-no-internal-linking-overoptimization-penalty-27092.html So what's the tea? Is internal linking over-optimization a myth? If it's true, what's the tipping point? Does it have to be super invasive and keyword-stuffy to negatively impact rankings? Or does simple light optimization of internal links on every page trigger this?
- Can subdomains hurt your primary domain's SEO?
 Our primary website https://domain.com has a subdomain https://subDomain.domain.com, and on that subdomain we have a Jive-hosted community, with a few links to and fro. In GA they are set up as different properties, but there are many SEO issues in the Jive-hosted site, in which many different people can create content, delete content, comment, etc. There are also issues related to how Jive structures content, broken links, etc. My question is this: aside from the SEO issues within the subdomain itself, can the performance of that subdomain negatively impact the SEO performance and rank of the primary domain? I've heard and read conflicting reports about this, and it would be nice to hear from the Moz community about options to resolve such issues if they exist. Thanks.
- SEO on jobs sites: how to deal with expired listings with "Google for Jobs" around
 Dear community, when dealing with expired job offers on jobs sites from an SEO perspective, most practitioners recommend implementing 301 redirects to category pages in order to keep the positive ranking signals of incoming links. Is it necessary to rethink this recommendation now that "Google for Jobs" is around? Google's recommendations on how to handle expired job postings do not include 301 redirects: "To remove a job posting that is no longer available: Remove the job posting from your sitemap. Do one of the following: remove the JobPosting markup from the page; remove the page entirely (so that requesting it returns a 404 status code); or add a noindex meta tag to the page. Note: Do NOT just add a message to the page indicating that the job has expired without also doing one of the above." Will implementing 301 redirects hurt the chances to appear in "Google for Jobs"? What do you think?
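A minimal sketch of the "remove the page entirely" option from that guidance, assuming mod_alias; the URL is a hypothetical placeholder:

 # Answer requests for an expired posting with a plain 404, the status
 # Google's guidance asks for (path is hypothetical).
 Redirect 404 /jobs/senior-developer-10293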
- What's the best possible URL structure for a local search engine?
 Hi Mozzers, I'm working at AskMe.com, which is a local search engine in India, i.e. if you're standing somewhere and looking for the pizza joints nearby, we pick your current location and share the list of pizza outlets nearby along with ratings, reviews, etc. about these outlets. Right now, our URL structure looks like www.askme.com/delhi/pizza-outlets for the city-specific category pages (here, "Delhi" is the city name and "Pizza Outlets" is the category) and www.askme.com/delhi/pizza-outlets/in/saket for a category page in a particular area (here "Saket") in a city. The URL looks a little different if you're searching for something which is not a category (or not mapped to a category, in which case we 301 redirect you to the category page): it looks like www.askme.com/delhi/search/pizza-huts/in/saket if you're searching for pizza huts in Saket, Delhi, as "pizza huts" is neither a category nor is it mapped to any category. We're also dealing in ads and deals, along with our very own e-commerce brand AskMeBazaar.com, to create a better user experience and a one-stop shop for our customers. Now we're working on a URL restructure project, and my question to you all, SEO rockstars, is: what would be the best possible URL structure we could have? Assume we have kick-ass developers who can manage any given URL structure at the backend.
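A purely illustrative sketch of how such a scheme could be wired up with mod_rewrite; search.php and its parameters are hypothetical stand-ins for whatever handler the backend actually uses:

 RewriteEngine on
 # /delhi/search/pizza-huts/in/saket -> free-text search in an area
 RewriteRule ^([a-z-]+)/search/([a-z-]+)/in/([a-z-]+)/?$ search.php?city=$1&q=$2&area=$3 [L,QSA]
 # /delhi/pizza-outlets/in/saket -> city/category/area lookup
 RewriteRule ^([a-z-]+)/([a-z-]+)/in/([a-z-]+)/?$ search.php?city=$1&category=$2&area=$3 [L,QSA]
 # /delhi/pizza-outlets -> city-wide category page
 RewriteRule ^([a-z-]+)/([a-z-]+)/?$ search.php?city=$1&category=$2 [L,QSA]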
- Alt tag for src='blank.gif' on lazy load images
 I didn't find an answer in a search on this, so maybe someone here has faced this before. I am loading 20 images that are in the viewport and a bit below. The next 80 images I want to lazy-load. They are therefore seen by the bot as a blank.gif file. However, I would like to get some credit for them by giving a description in the alt tag. Is that a no-no? If not, do they all have to be the same alt description, since the src name is the same? I don't want to mess things up with Google by being too aggressive, but at the same time those are valid images once they are lazy-loaded, so I would like to get some credit for them. Thanks! Ted
- How do I get rel='canonical' to eliminate the trailing slash on my home page?
 I have been searching high and low. Please help if you can, and thank you if you spend the time reading this. I think this issue may be affecting most pages.
 SUMMARY: I want to eliminate the trailing slash that is appended to my website.
 SPECIFIC ISSUE: I want www.threewaystoharems.com to show up to users and search engines without the trailing slash, but try as I might it shows up like www.threewaystoharems.com/, which is the canonical link. WHY? I'm concerned my backlinks to the URL without the trailing slash will not be recognized, as most people are going to backlink me without a trailing slash. I don't want to lose link juice from the people and the search engines not being in consensus about what my page address is.
 THINGS I'VE TRIED: (1) I've gone into my WordPress settings under permalinks and tried to specify no trailing slash. I can do this here, but not for the home page. (2) I've tried using SEO by Yoast to set the canonical page. This would work if I had a static front page, but my front page is of blog posts, so there are no advanced page settings to set the canonical tag. (3) I'd like to just find the source code of the home page, but because it is CSS, I don't know where to find the reference. I have gone into the CSS files of my WordPress theme, looking in header and index and everywhere else, for a specification of what the canonical page is. I am not able to find it. (4) I went into the cPanel file manager looking for files that contain "canonical". I only found a file called canonical.php; the only thing that seemed worth changing was line 139, from $redirect_url = home_url('/'); to $redirect_url = home_url(''); nothing happened. I'm thinking it is actually specified in the .htaccess file. (5) I have gone through the .htaccess file and put these 4 lines at the top (didn't redirect or create the proper canonical link) and then at the bottom of the file (also didn't redirect or create the proper canonical link):
 RewriteEngine on
 RewriteCond %{HTTP_HOST} ^([a-z.]+)?threewaystoharems.com$ [NC]
 RewriteCond %{HTTP_HOST} !^www. [NC]
 RewriteRule .? http://www.%1threewaystoharems.com%{REQUEST_URI} [R=301,L]
 Please help, friends.
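Two points worth separating here. First, for the bare hostname the trailing slash is not really optional: a request for www.threewaystoharems.com is a request for the path "/", so the two forms are the same resource, and links to either resolve identically; no link juice is lost. Second, the rules quoted above only handle the non-www to www redirect. A simpler sketch of that part, assuming mod_rewrite (it cannot, and no rule can, strip the "/" from the root URL):

 RewriteEngine on
 # Redirect bare-domain requests to the www host, preserving the path.
 RewriteCond %{HTTP_HOST} ^threewaystoharems\.com$ [NC]
 RewriteRule ^(.*)$ http://www.threewaystoharems.com/$1 [R=301,L]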
- May I know the meaning of these parameters in .htaccess?
 # Begin HackRepair.com Blacklist
 RewriteEngine on
 # Abuse Agent Blocking
 RewriteCond %{HTTP_USER_AGENT} ^BlackWidow [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Bolt\ 0 [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Bot\ mailto:[email protected] [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} CazoodleBot [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^ChinaClaw [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Custo [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Default\ Browser\ 0 [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^DIIbot [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^DISCo [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} discobot [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Download\ Demon [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^eCatch [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ecxi [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^EirGrabber [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^EmailCollector [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^EmailSiphon [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^EmailWolf [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Express\ WebPictures [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^ExtractorPro [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^EyeNetIE [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^FlashGet [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^GetRight [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^GetWeb! [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Go!Zilla [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Go-Ahead-Got-It [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^GrabNet [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Grafula [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} GT::WWW [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} heritrix [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^HMView [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} HTTP::Lite [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} HTTrack [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ia_archiver [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} IDBot [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} id-search [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} id-search.org [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Image\ Stripper [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Image\ Sucker [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} Indy\ Library [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^InterGET [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Internet\ Ninja [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^InternetSeer.com [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} IRLbot [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ISC\ Systems\ iRc\ Search\ 2.1 [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Java [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^JetCar [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^JOC\ Web\ Spider [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^larbin [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^LeechFTP [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} libwww [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} libwww-perl [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Link [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} LinksManager.com_bot [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} linkwalker [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} lwp-trivial [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Mass\ Downloader [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Maxthon$ [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} MFC_Tear_Sample [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^microsoft.url [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} Microsoft\ URL\ Control [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^MIDown\ tool [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Mister\ PiX [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} Missigua\ Locator [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Mozilla.*Indy [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Mozilla.NEWT [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^MSFrontPage [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Navroad [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^NearSite [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^NetAnts [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^NetSpider [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Net\ Vampire [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^NetZIP [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Nutch [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Octopus [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Offline\ Explorer [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Offline\ Navigator [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^PageGrabber [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} panscient.com [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Papa\ Foto [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^pavuk [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} PECL::HTTP [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^PeoplePal [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^pcBrowser [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} PHPCrawl [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} PleaseCrawl [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^psbot [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^RealDownload [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^ReGet [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Rippers\ 0 [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} SBIder [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^SeaMonkey$ [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^sitecheck.internetseer.com [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^SiteSnagger [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^SmartDownload [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} Snoopy [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} Steeler [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^SuperBot [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^SuperHTTP [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Surfbot [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^tAkeOut [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Teleport\ Pro [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Toata\ dragostea\ mea\ pentru\ diavola [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} URI::Fetch [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} urllib [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} User-Agent [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^VoidEYE [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Web\ Image\ Collector [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Web\ Sucker [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} Web\ Sucker [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} webalta [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^WebAuto [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^[Ww]eb[Bb]andit [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} WebCollage [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^WebCopier [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^WebFetch [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^WebGo\ IS [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^WebLeacher [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^WebReaper [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^WebSauger [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Website\ eXtractor [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Website\ Quester [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^WebStripper [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^WebWhacker [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^WebZIP [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} Wells\ Search\ II [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} WEP\ Search [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Wget [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Widow [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^WWW-Mechanize [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^WWWOFFLE [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Xaldon\ WebSpider [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} zermelo [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^Zeus [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ^(.)Zeus.Webster [NC,OR]
 RewriteCond %{HTTP_USER_AGENT} ZyBorg [NC]
 RewriteRule ^.* - [F,L]
 # Abuse bot blocking rule end
 # End HackRepair.com Blacklist
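For reference, an annotated excerpt of the pattern above; each RewriteCond tests the User-Agent header of the incoming request, and the final RewriteRule blocks any client that matched:

 # %{HTTP_USER_AGENT}  the User-Agent request header being tested
 # ^Wget               a regex: the UA string starts with "Wget"
 # NC                  no case: match case-insensitively
 # OR                  combine with the next RewriteCond as a logical OR
 RewriteCond %{HTTP_USER_AGENT} ^Wget [NC,OR]
 # ^.*  match any requested URL
 # -    leave the URL unchanged (no substitution)
 # F    forbidden: respond with 403
 # L    last: stop processing further rules
 RewriteRule ^.* - [F,L]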
- Soft 404s from pages blocked by robots.txt: cause for concern?
 We're seeing soft 404 errors appear in our Google Webmaster Tools section on pages that are blocked by robots.txt (our search result pages). Should we be concerned? Is there anything we can do about this?
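One hedged alternative, sketched below: instead of blocking the search result pages in robots.txt (which leaves Google guessing about URLs it cannot fetch), allow the crawl and return an explicit noindex header. The /search path is a hypothetical placeholder for the real result-page pattern:

 # Flag internal search result pages (hypothetical path).
 SetEnvIf Request_URI "^/search" INTERNAL_SEARCH
 # Serve an explicit noindex so Google need not guess at a soft 404.
 Header set X-Robots-Tag "noindex, follow" env=INTERNAL_SEARCH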