404 Crawl Diagnostics with void(0) appended to URL
Hello, I am getting loads of 404s reported in my Crawl Diagnostics report, all with void(0) appended to the end of the URL. For example: http://lfs.org.uk/films-and-filmmakers/watch-our-films/1289/void(0)
The site is running on Drupal 7. Has anyone come across this before? Kind regards, Moshe
I think the void(0) problem comes from your theme if you use WordPress, or the javascript:void(0) code is not set up correctly in your template file. See the javascript:void(0) link examples on this page of wikihat => Kickass Torrents (see the "click to open" button there).
 Hi Moshe! Did this ever work out for you?  
Hi Kane, Many thanks for the links. The Google forum link seems to be the right direction. I am not the developer of the site, but I will forward the link to them and hope they can help (it has been 3 years since the site went live). Many thanks, Moshe
Hi Dimitri, I am pretty sure it is simply that something is producing links with void(0) appended. The link I used in my original post should actually be: http://lfs.org.uk/films-and-filmmakers/watch-our-films/1289/tongues
The Moz crawl report says that the above page is the referrer to http://lfs.org.uk/films-and-filmmakers/watch-our-films/1289/void(0). This repeats itself on many pages of the site. Many thanks, Moshe
Hi Moshe, My guess is that somewhere on the site someone created a pop-up window or another load effect and used void(0) to create a link. The better practice is to create a normal link and control what happens when it's clicked using JavaScript. You could also add rel="nofollow" to those links, but that's less ideal than the first option. These explain the issue as well, for additional reference: https://productforums.google.com/forum/#!topic/webmasters/3ShUdX7_GqQ and this answer (http://stackoverflow.com/posts/134957/revisions) to this question.
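To make that concrete, here is a minimal sketch. It assumes the Drupal theme is emitting something like href="void(0)" (without the javascript: prefix), which browsers and crawlers resolve as a relative URL and would produce exactly the .../1289/void(0) requests in the crawl report; the link text and element id are made up for the example, and the target URL is the one Moshe mentions above.

    <!-- Suspected problem pattern (an assumption, not confirmed from the site):
         without the "javascript:" prefix, void(0) is treated as a relative URL,
         so crawlers request /films-and-filmmakers/watch-our-films/1289/void(0)
         and hit a 404. -->
    <a href="void(0)"
       onclick="window.open('/films-and-filmmakers/watch-our-films/1289/tongues'); return false;">
      Watch the film
    </a>

    <!-- The preferred fix described above: a real href, with the pop-up behaviour
         layered on in JavaScript, so the link still resolves (and crawls) normally. -->
    <a id="watch-film" href="/films-and-filmmakers/watch-our-films/1289/tongues">Watch the film</a>
    <script>
      document.getElementById('watch-film').addEventListener('click', function (event) {
        event.preventDefault();                                       // cancel normal navigation
        window.open(this.href, 'filmWindow', 'width=800,height=450'); // open the pop-up instead
      });
    </script>

If the links really do read href="javascript:void(0)", a crawler shouldn't append void(0) to the URL, so checking the rendered source of the /1289/tongues page for a bare void(0) href is a quick way to confirm or rule this out.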
Hi there. It seems that something is wrong with the JavaScript, because void(0) looks like a piece of JS code. However, even if I remove the void part, the page still doesn't exist. Are you sure it's just a "void(0)" problem?
Related Questions
- URL Length Issue
  MOZ is telling me the URLs are too long. I did a little research and found out that URL length is not really a serious problem; in fact, others recommend ignoring the issue. Even on their blog I found this explanation: "Shorter URLs are generally preferable. You do not need to take this to the extreme, and if your URL is already less than 50-60 characters, do not worry about it at all. But if you have URLs pushing 100+ characters, there's probably an opportunity to rewrite them and gain value. This is not a direct problem with Google or Bing - the search engines can process long URLs without much trouble. The issue, instead, lies with usability and user experience. Shorter URLs are easier to parse, copy and paste, share on social media, and embed, and while these may all add up to a fractional improvement in sharing or amplification, every tweet, like, share, pin, email, and link matters (either directly or, often, indirectly)." And yet, I have these questions: in this case, why do I get this error telling me the URLs are too long, and what are the best practices to resolve it? Thank you. (Moz Pro | Cart_generation1)
- Url-delimiter vs. SEO
  Hi all, Our customer is building a new homepage and uses pages generated by special modules, such as a blog page generated by the blog module (not only for blogs, but also for lightboxes). For this, the programmer uses a URL delimiter in his URL parsing, for example /b/ or /s/, so the URLs look like this: www.test.ch/de/blog/b/an-article and www.test.ch/de/s/management-coaching. Does the URL delimiter (/b/ or /s/ in the URL) have a negative influence on SEO? Should we remove the /b/ or /s/ for better SEO performance? Thank you in advance for your feedback. Greetings, Samuel (Moz Pro | brunoe10)
- Youtube traffic page url referral
  Hello, How can I see which YouTube videos with my domain in their description URL drive traffic to my domain? I can see in GA how many visitors are coming from YouTube to my domain, but I can't see which YouTube video pages have driven the traffic. Any help? (Moz Pro | xeonet320)
- Block Moz (or any other robot) from crawling pages with specific URLs
  Hello! Moz reports that my site has around 380 pages with duplicate content. Most of them come from dynamically generated URLs with specific parameters. I have sorted this out for Google in Webmaster Tools (the new Google Search Console) by blocking the pages with these parameters. However, Moz is still reporting the same number of duplicate content pages and, to stop it, I know I must use robots.txt. The trick is that I don't want to block every page, just the pages with specific parameters, because among these 380 pages there are other pages with no parameters (or different parameters) that I need to take care of. Basically, I need to clean this list to be able to use the feature properly in the future. I have read through the Moz forums and found a few related topics, but there is no clear answer on how to block only pages with specific URLs. Therefore, I have done my research and come up with these lines for robots.txt:
    User-agent: dotbot
    Disallow: /*numberOfStars=0
    User-agent: rogerbot
    Disallow: /*numberOfStars=0
  My questions: 1. Are the above lines correct, and would they block Moz (dotbot and rogerbot) from crawling only pages that have the numberOfStars=0 parameter in their URLs, leaving other pages intact? 2. Do I need an empty line between the two groups (between "Disallow: /*numberOfStars=0" and "User-agent: rogerbot"), or does it even matter? I think this would help many people, as there is no clear answer on how to block crawling of only pages with specific URLs; moreover, it should be valid for any robot out there. Thank you for your help! (Moz Pro | Blacktie)
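For reference, a hedged sketch of the layout being asked about, with the two records separated by a blank line (the conventional separator between per-agent groups). The parameter name is taken from the question; whether dotbot and rogerbot honour the * wildcard is something to confirm in Moz's crawler documentation rather than assume, since wildcard matching is a de-facto extension (Googlebot and Bingbot honour it) rather than part of the original robots.txt standard.

    # robots.txt sketch (parameter name taken from the question above)

    # Moz's link-index crawler
    User-agent: dotbot
    Disallow: /*numberOfStars=0

    # Moz's Site Crawl crawler
    User-agent: rogerbot
    Disallow: /*numberOfStars=0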
- Crawlers crawl weird long urls
  I started a crawl for the first time and I get many errors, but the weird thing is that the crawler keeps finding duplicate, long, non-existing URLs. For example (to be clear): there is a page www.website.com/dogs/dog.html, but then it continues crawling:
    www.website.com/dogs/dog.html
    www.website.com/dogs/dogs/dog.html
    www.website.com/dogs/dogs/dogs/dog.html
    www.website.com/dogs/dogs/dogs/dogs/dog.html
    www.website.com/dogs/dogs/dogs/dogs/dogs/dog.html
  What can I do about this? Screaming Frog gave me the same issue, so I know it's something with my website. (Moz Pro | r.nijkamp)
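Not an answer from the thread, but one common cause of this exact pattern is worth sketching: a document-relative link that omits the leading slash. The page path comes from the question; the link text is invented. On a page served at /dogs/dog.html, the relative href below resolves to /dogs/dogs/dog.html, and if the server answers that URL with the same template (a catch-all route or rewrite rule, for instance), each crawl pass discovers one more level.

    <!-- Hypothetical markup on http://www.website.com/dogs/dog.html -->
    <!-- Problem: resolved against the current directory (/dogs/), this points
         to /dogs/dogs/dog.html, one level deeper every time it is re-crawled. -->
    <a href="dogs/dog.html">Read more</a>

    <!-- Fix: a root-relative (or absolute) href resolves the same way from any depth. -->
    <a href="/dogs/dog.html">Read more</a>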
- Increase of 404 errors after change of encoding
  Hello, We have just launched a new version of our website with a new UTF-8 encoding. The thing is, we use a comma as a separator, and since the new website went live I have seen a massive increase in 404 errors for comma-encoded URLs. Here is an example: http://web.bons-de-reduction.com/annuaire%2C321-sticker%2Csite%2Cpromotions%2C5941.html instead of http://web.bons-de-reduction.com/annuaire,321-sticker,site,promotions,5941.html. I checked with Screaming Frog SEO and Xenu, and I can't find any encoded URLs. Does anyone have a clue how to fix this? Thanks. (Moz Pro | RetailMeNotFr0)
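As background, %2C is simply the percent-encoding of a comma, so 404s like the one above usually mean something in the new templates is over-encoding the path when it builds links. A hedged JavaScript illustration (the path is taken from the question; the encoding calls are generic, not the site's actual code):

    const path = '/annuaire,321-sticker,site,promotions,5941.html';

    // Encoding the whole path escapes commas (and slashes) and yields the 404 variant:
    console.log(encodeURIComponent(path));
    // -> "%2Fannuaire%2C321-sticker%2Csite%2Cpromotions%2C5941.html"

    // encodeURI leaves commas and slashes alone, so the link keeps its original form:
    console.log(encodeURI(path));
    // -> "/annuaire,321-sticker,site,promotions,5941.html"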
- Site Explorer - No Data Available for this URL
  Hi all, I have just joined on the trial offer. I'm not sure I can afford the monthly payments, but I'm hoping SEOmoz will show me that I also cannot afford to be without it! I am in the process of learning this site and flicking through each section to see what things do. However, when I enter my URL into Site Explorer I get the message "No Data Available for this URL". My site should be crawlable, so how do I get to see data for my site(s)? I won't post my URL here, as the site has a slightly adult theme; if anyone could confirm whether I can post "slightly adult" sites. Best regards, Jon (Moz Pro | jonny512379)
- Batch lookup domain authority on a list of URLs?
  I found this site that describes how to use Excel to batch-look-up URLs using the SEOmoz API. The only problem is that the SEOmoz API times out and returns 1 if I try dragging the formula down the cells, which leaves me copying, waiting 5 seconds, and copying again. This is basically as slow as manually looking up each URL. Does anyone know a workaround? (Moz Pro | SirSud1)
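A hedged sketch of the usual workaround when a metrics API is rate-limited: script the lookups with a pause between calls instead of recalculating a spreadsheet formula. The endpoint below is a placeholder, not the real SEOmoz/Mozscape URL, and the 10-second delay is only an example; both should come from the API's own documentation.

    // Placeholder endpoint -- substitute the actual API URL and authentication.
    const ENDPOINT = 'https://api.example.com/url-metrics?target=';
    const urls = ['http://example.com/', 'http://example.org/'];

    const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

    async function lookupAll(targets, delayMs = 10000) {
      const results = [];
      for (const target of targets) {
        const response = await fetch(ENDPOINT + encodeURIComponent(target));
        results.push(await response.json());
        await sleep(delayMs); // wait between calls so the API doesn't time out or throttle
      }
      return results;
    }

    lookupAll(urls).then((rows) => console.table(rows));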