Moz Q&A is closed.
After more than 13 years, and tens of thousands of questions, Moz Q&A closed on 12th December 2024. While we're not completely removing the content - many posts will still be viewable - we have locked both new posts and new replies. More details here.
Why are bit.ly links being indexed and ranked by Google?
 I did a quick search for "site:bit.ly" and it returns more than 10 million results. Given that bit.ly links are 301 redirects, why are they being indexed in Google and ranked according to their destination? I'm working on a similar project to bit.ly and I want to make sure I don't run into the same problem. 
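Since you're building something similar: one mitigation worth knowing (and not something this thread confirms Bitly does) is to serve a single clean 301 and also send an X-Robots-Tag: noindex header, which asks Google not to index the short URL itself. Here's a minimal sketch using Python's standard library; the class name and the link mapping are purely illustrative:

```python
import http.server

# Illustrative mapping; a real shortener would use a datastore.
SHORTLINKS = {"/abc123": "https://example.com/some/long/path"}

class Redirector(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        target = SHORTLINKS.get(self.path)
        if target is None:
            self.send_error(404)
            return
        self.send_response(301)                      # permanent redirect, no body needed
        self.send_header("Location", target)
        self.send_header("X-Robots-Tag", "noindex")  # ask crawlers not to index the short URL
        self.end_headers()

    do_HEAD = do_GET  # header checkers often use HEAD; answer it identically

    def log_message(self, *args):
        pass  # keep the demo quiet
```

Run it with `http.server.HTTPServer(("", 8000), Redirector).serve_forever()`. One caveat: whether Google applies X-Robots-Tag on a redirect response is debated, so treat it as one signal rather than a guarantee; it applies only to the short URL and doesn't affect indexing of the destination page.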
 Given that Chrome and most header checkers (even older ones) are processing the 301s, I don't think a minor header difference would throw off Google's crawlers. They have to handle a lot. I suspect it's more likely that either: (a) There was a technical problem the last time they crawled (which would be impossible to see now, if it had been fixed). (b) Some other signal is overwhelming or negating the 301 - such as massive direct links, canonicals, social, etc. That can be hard to measure. I don't think it's worth getting hung up on the particulars of Bit.ly's index. I suspect many of these issues are unique to them. I also expect problems will expand with scale. What works for hundreds of pages may not work for millions, and Google isn't always great at massive-scale redirects. 
Here's something more interesting: Bitly vs. tiny.cc. I used http://web-sniffer.net/ to grab the headers of both. With bit.ly links I see an HTTP response header of 301 followed by "Content", but with tiny.cc links I only see the header redirect. The bit.ly response body is a tiny (0.11 KiB) HTML page with a title of "bit.ly" and a "moved here" anchor pointing at the destination, https://twitter.com/KPLU.
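You can reproduce this kind of header check locally instead of relying on a web tool. Here's a minimal sketch using only Python's standard library; the function name, hop limit, and HEAD-only approach are my own choices, and query strings are ignored for brevity:

```python
import http.client
from urllib.parse import urlsplit

def trace(url, max_hops=10):
    """Follow a redirect chain hop by hop, returning (status, url) pairs."""
    hops = []
    for _ in range(max_hops):
        parts = urlsplit(url)
        conn_cls = (http.client.HTTPSConnection if parts.scheme == "https"
                    else http.client.HTTPConnection)
        conn = conn_cls(parts.netloc, timeout=10)
        conn.request("HEAD", parts.path or "/")
        resp = conn.getresponse()
        hops.append((resp.status, url))
        location = resp.getheader("Location")
        conn.close()
        if resp.status not in (301, 302, 303, 307, 308) or not location:
            break  # reached the final destination (or a dead end)
        # resolve relative Location headers against the current host
        url = location if "://" in location else f"{parts.scheme}://{parts.netloc}{location}"
    return hops
```

Calling `trace("http://bit.ly/O6QkSI")` would list each hop's status code, so a 301-to-403 chain like the one discussed later in this thread is immediately visible.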
 I was getting 301->403 on SEO Book's header checker (http://tools.seobook.com/server-header-checker/), but I'm not seeing it on some other tools. Not worth getting hung up on, since it's 1 in 70M. 
I wonder why you're seeing a 403; I still see a 200 for http://www.wlns.com/story/24958963/police-id-adrian-woman-killed-in-us-127-crash:

HTTP/1.1 200 OK
Server IP Address: 192.80.13.72
ntCoent-Length: 60250
Content-Type: text/html; charset=utf-8
Server: Microsoft-IIS/6.0
WN: IIS27
P3P: CP="CAO ADMa DEVa TAIa CONi OUR OTRi IND PHY ONL UNI COM NAV INT DEM PRE"
X-Powered-By: ASP.NET
X-AspNet-Version: 4.0.30319
wn_vars: CACHE_DB
Content-Encoding: gzip
Content-Length: 13213
Cache-Control: private, max-age=264
Expires: Wed, 19 Mar 2014 21:38:36 GMT
Date: Wed, 19 Mar 2014 21:34:12 GMT
Connection: keep-alive
Vary: Accept-Encoding
 I show the second one (bit.ly/O6QkSI) redirecting to a 403. Unfortunately, these are only anecdotes, and there's almost no way we could analyze the pattern across 70M indexed pages without a massive audit (and Bitly's cooperation). I don't see anything inherently wrong with their setup, and if you noticed that big of a jump (10M - 70M), it's definitely possible that something temporarily went wrong. In that case, it could take months for Google to clear out the index. 
I looked at all 3 redirects and they all showed a single 301 redirect to a 200 destination for me. Do you recall which one was a 403? Looking back at my original question, last month bit.ly had 10M results and now I'm seeing 70M, which means there was a [relatively] huge increase in indexed shortlinks. I also see 1,000+ results for "mz.cm", which isn't too strange, since mz.cm is just a CNAME to the Bitly platform. I found another URL shortener with activity, http://scr.im/, and there I only saw the correct destination pages being indexed by Google, not the short links. I wonder if the indexing is particular to Bitly and/or the IP subnet behind Bitly links. But I looked at yet another one, bit.do, and their shortlinks are being indexed too. Back to square one.
One of those 301s resolves to a 403, which is probably thwarting Google, but the other two seem like standard pages. Honestly, it's tough to do anything but speculate. It may be that so many people are linking to or sharing the short version that Google is choosing to ignore the redirect for ranking purposes (they don't honor signals as often as we like to think). It could simply be that some of them are fairly freshly created and haven't been processed correctly yet. It could be that these URLs got indexed when the target page was having problems (bad headers, downtime, etc.), and Google hasn't recrawled and refreshed those URLs. I noticed that a lot of our "mz.cm" URLs (Moz's Bitly-powered short domain) seem to be indexed. In our case, it looks like we're chaining two 301s (because we made the domain move last year). It may be that something as small as that chain could throw off the crawlers, especially for links that aren't recrawled very often. I suspect that shortener URLs often get a big burst of activity and crawls early on (since that's the nature of social sharing) but then don't get refreshed very often. Ultimately, on the scale of Bit.ly, a lot can happen. It may be that 70M URLs is barely a drop in the bucket for Bit.ly as well.
I spot-checked a few and noticed some are only single 301 redirects. And looking at the results for site:bit.ly, some even have breadcrumbs, ironically enough. One example: bit.ly/M5onJO. None of these should be indexed, but for some reason they are. Presently I see 70M pages indexed for "bit.ly" and almost 600,000 results for "bitly.com".
 It looks like bit.ly is chaining two 301s: the first one goes to feedproxy.google.com (FeedProxy is like AdSense for feeds, I think), and then the second 301 goes to the destination site. I suspect this intermediary may be part of the problem. 
I wasn't sure on this one, but I found this on readwrite.com: "Bit.ly serves up links to Calais and gets back a list of the keywords and concepts that the linked-to pages are actually about. Think of it as machine-performed auto tagging with subject keywords. This structured data is much more interesting than the mere presence of search terms in a full text search." Perhaps this structured data is submitted to Google? Any other ideas?