Moz Q&A is closed.
After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. While we're not completely removing the content (many posts will still be viewable), we have locked both new posts and new replies. More details here.
How do you check the google cache for hashbang pages?
- So we use http://webcache.googleusercontent.com/search?q=cache:x.com/#!/hashbangpage to check what Googlebot has cached, but when we try this method for hashbang pages, we get x.com's cache, not the cache for x.com/#!/hashbangpage. That actually makes sense: the hashbang is part of the homepage URL in that case, so I get why the cache returns the homepage. My question is: how can you actually look up the cache for a hashbang page?
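To see why the lookup falls back to the homepage, here's a minimal sketch (Node, built-in WHATWG URL API; the example URL is the one from the question). The fragment after `#` is never sent to the server, so the cache key is only the path:

```javascript
// Why cache:x.com/#!/hashbangpage returns the homepage cache:
// the fragment is client-side only and never reaches the server,
// so the only path Google's cache can key on is "/".
const u = new URL('http://x.com/#!/hashbangpage');

console.log(u.pathname); // "/" - the only path the server (or the cache) sees
console.log(u.hash);     // "#!/hashbangpage" - exists only in the browser
```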
- I was actually trying to give you the tools to figure out what's cached and indexed. You can just run a site search for the content and look at the cache. If nothing shows up, it's probably not indexed.
- Thanks, Carson, but that wasn't the question. The question was how to check the cache.
- Generally I'd avoid hashes or hashbangs if you have large amounts of content you want indexed behind them. Use pushState instead whenever it makes sense for the user to actually change the URL. The general rule is that if you can see the content in your page source (the Ctrl+U version), it's probably being indexed. That means client-side AJAX content behind hashbangs is generally not indexed, whereas server-side rendered content generally is. If for some reason you must use hashbangs AND client-rendered content, create an HTML snapshot of your page for Google. Generally, though, that's more effort than changing one of the above.
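The hashbang-vs-pushState distinction above can be sketched in a few lines. This is a hedged illustration, not anyone's production router: `getRoute` is a made-up helper name, and the location objects are plain stand-ins for `window.location`.

```javascript
// The same logical "route", seen two ways. With hashbangs the route lives
// in the fragment, which browsers never send to the server - crawlers and
// caches only ever see the pathname. With pushState the route IS the path.
function getRoute(loc) {
  if (loc.hash && loc.hash.startsWith('#!')) {
    return loc.hash.slice(2); // hashbang route: client-side only
  }
  return loc.pathname;        // real path: visible to servers and crawlers
}

// In the browser, the pushState alternative looks roughly like:
//   history.pushState(null, '', '/stages/my-page');
// which updates the visible URL without a reload, so the server can also
// render /stages/my-page directly when a crawler requests it.

console.log(getRoute({ pathname: '/', hash: '#!/stages/my-page' })); // "/stages/my-page"
console.log(getRoute({ pathname: '/stages/my-page', hash: '' }));    // "/stages/my-page"
```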
- I think Google has stopped responding to cache requests on hashbang pages altogether. See here (I'm just playing with random URLs and don't see the Google cache 404'ing as it should): http://recordit.co/XBlo3U2A73 You can really put anything there and it won't work.
- Searching for indexed & duplicate content: I put a line or two in quotes and Googled it. I found most of the UTMs that way. Once you do that, it's a simple change to site:yoursite.com inurl:UTM.
- Thanks a lot, Matt. I'm curious: how exactly did you find the version with the UTM codes that is being cached?
- Strangely, Browseo sees it correctly: http://www.browseo.net/?url=https%3A%2F%2Fplaceit.net%2F%3F_escaped_fragment_%3D%2Fstages%2Fsamsung-galaxy-note-friends-park I'm not 100% sure why this is happening on your site specifically. Normally the #! isn't too big of an issue for cache, but I've seen it have a few hiccups. These pages seem to be indexed fine, but they aren't generating a cache. I did find a few working, but only those with UTM codes. This one doesn't look like it's working, but view the source code: the content is actually there. I found it by Googling the content in quotation marks.
- What you're saying makes sense, and our URLs are set up like this, but we still just see the homepage come up when looking up the Google cache with the _escaped_fragment_ version: http://webcache.googleusercontent.com/search?q=cache:https://placeit.net/?_escaped_fragment_=/stages/samsung-galaxy-note-friends-park https://placeit.net/?_escaped_fragment_=/stages/samsung-galaxy-note-friends-park homepage - http://webcache.googleusercontent.com/search?q=cache:https://placeit.net/?_escaped_fragment_=
- Let's use a Wix example site (not a client, just a sample from their page). Say you wanted to check: http://www.kingskolacheny.com/#!press/crr2 In the source code I see the escaped fragment URL. This is the one you can find a cache for: http://www.kingskolacheny.com/?_escaped_fragment_=press/crr2 That leads me to: http://webcache.googleusercontent.com/search?q=cache:http://www.kingskolacheny.com/?_escaped_fragment_=press/crr2 If your #! URLs are not set up this way, you will struggle to see it. One-page websites are ... one page. But if you have escaped fragment URLs set up, you should be able to submit those and go from there. The easiest way I know to find these is Screaming Frog, AJAX tab, Ugly URL field - try that one.
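The mapping above follows Google's (now retired) AJAX crawling scheme: a `#!` URL corresponds to an "ugly" `?_escaped_fragment_=` URL, and the ugly URL is what you feed to webcache.googleusercontent.com. A sketch of that pipeline, using the Wix example URL from this post; the function names are my own, not any library's API:

```javascript
// Convert a pretty #! URL to its "ugly" _escaped_fragment_ equivalent,
// per the AJAX crawling scheme. Non-hashbang URLs pass through unchanged.
function toUglyUrl(prettyUrl) {
  const u = new URL(prettyUrl);
  if (!u.hash.startsWith('#!')) return prettyUrl;
  const fragment = u.hash.slice(2);
  u.hash = '';
  // searchParams.set percent-encodes special characters in the fragment
  // (e.g. "/" becomes "%2F"), as the scheme requires.
  u.searchParams.set('_escaped_fragment_', fragment);
  return u.toString();
}

// The cache lookup URL is just the ugly URL behind the cache: operator.
function toCacheUrl(prettyUrl) {
  return 'http://webcache.googleusercontent.com/search?q=cache:' + toUglyUrl(prettyUrl);
}

console.log(toUglyUrl('http://www.kingskolacheny.com/#!press/crr2'));
// http://www.kingskolacheny.com/?_escaped_fragment_=press%2Fcrr2
```

Feeding a list of hashbang URLs (say, from Screaming Frog's AJAX tab) through `toCacheUrl` gives you one cache-lookup URL per page.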
Related Questions
- URL structure - Page Path vs No Page Path
 We are currently rebuilding our URL structure for ecommerce websites. We have seen a lot of sites removing the page path on product pages, e.g. https://www.theiconic.co.nz/liberty-beach-blossom-shirt-680193.html versus what would normally be https://www.theiconic.co.nz/womens-clothing-tops/liberty-beach-blossom-shirt-680193.html Should we be removing the page path for a product page to keep the URL shorter, or should we keep it? I can see that we would lose the hierarchy juice to a product page, but I'm not sure what is the right thing to do. (Intermediate & Advanced SEO | Ashcastle)
- Why is Google ranking irrelevant / not preferred pages for keywords?
 Over the past few months we have been chipping away at duplicate content issues. We know this is our biggest issue and is working against us. However, it is due to this client also owning the competitor site. Therefore, product merchandise and top-level categories are highly similar, including a shared server. Our rank is suffering majorly for this, which we understand. However, as we make changes, and I track and perform test searches, the pages that Google ranks for keywords never seem to match or make sense, at all. For example, I search for "solid scrub tops" and it ranks the "print scrub tops" category. Or the "Men Clearance" page is ranking for the keyword "Women Scrub Pants". Or I will search for a specific brand, and it ranks a completely different brand. Has anyone else seen this behavior with duplicate content issues? Or is it an issue with some other penalty? At this point, our only option is to test something and see what impact it has, but it is difficult to do when keywords do not align with content. (Intermediate & Advanced SEO | lunavista-comm)
- Google cache is showing my UK homepage site instead of the US homepage and ranking the UK site in US
 Hi there, when I check the cache of the US website (www.us.allsaints.com), Google returns the UK website. This is also reflected in the US Google search results, where the UK site ranks for our brand name instead of the US site. The homepage has hreflang tags only on the homepage, and the domains have been pointed correctly to the right territories via Google Webmaster Console. This happened before, on 26th July 2015, and I was wondering if anyone had any idea why this is happening or if anyone has experienced the same issue. (Intermediate & Advanced SEO | adzhass)
- Location Pages On Website vs Landing pages
 We have been having a terrible time in the local search results for 20+ locations. I have Places set up and all, but we decided to create location pages on our sites for each location: a brief description and content optimized for our main service. The path would be something like .com/location/example. One option that has come up in question is to create landing pages / "mini websites" that would probably be location-example.url.com. I believe that the latter option, mini sites for each location, would be a bad idea, as those kinds of tactics were once spammy in the past. What are your thoughts and resources, so I can convince my team of the best practice? (Intermediate & Advanced SEO | KJ-Rodgers)
- Does Google give weight to the default measurement units (metric / imperial) on pages?
 Hi, we run a series of weather websites that cater for the units (feet, metres, Celsius, Fahrenheit etc.) for the users by means of detecting their geo-location. So users in the US see the site in feet and Fahrenheit, and pretty much the rest of the world gets metric units. My concern is that if we view the cached version of our pages as seen by the Googlebot out of Mountain View, California, it shows that our geoIP switch to imperial units has been activated for every location in the world. The question is: does the fact that we appear to cater for countries that use metric units by showing (in Google's eyes) imperial units by default count against us from an SEO point of view? Thanks in advance for any comments, Nick (Intermediate & Advanced SEO | nickruss)
- Date of page first indexed or age of a page?
 Hi, does anyone know any ways or tools to find when a page was first indexed/cached by Google? I remember a while back, around 2009, I had a Firefox plugin which could check this and gave you an exact date. Maybe this has changed since; I don't remember the plugin. Or any recommendations on finding the age of a page (not domain) for a website? This is for competitor research, not my own website. Cheers, Paul (Intermediate & Advanced SEO | MBASydney)
- Can too many "noindex" pages compared to "index" pages be a problem?
 Hello, I have a question for you: our website virtualsheetmusic.com includes thousands of product pages, and due to Panda penalties in the past, we have no-indexed most of the product pages hoping for a sort of recovery (not yet seen, though!). So, currently we have about 4,000 "index" pages compared to about 80,000 "noindex" pages. Now we plan to add an additional 100,000 new product pages from a new publisher to offer our customers more music choice, and these new pages will still be marked as "noindex, follow". At the end of the integration process, we will end up having something like 180,000 "noindex, follow" pages compared to about 4,000 "index, follow" pages. Here is my question: can this huge discrepancy between 180,000 "noindex" pages and 4,000 "index" pages be a problem? Can this kind of scenario have or cause any negative effect on our current natural SEs profile? Or is this something that doesn't actually matter? Any thoughts on this issue are very welcome. Thank you! Fabrizio (Intermediate & Advanced SEO | fablau)
- How to find all indexed pages in Google?
 Hi, we have an ecommerce site with around 4,000 real pages, but our index count is at 47,000 pages in Google Webmaster Tools. How can I get a list of all indexed pages of our domain? We're trying to locate the duplicate content. Doing a "site:www.mydomain.com" search only returns up to 676 results... Any ideas? Thanks, Ben (Intermediate & Advanced SEO | bjs2010)