How do you check the Google cache for hashbang pages?
-
So we use http://webcache.googleusercontent.com/search?q=cache:x.com/#!/hashbangpage to check what Googlebot has cached, but when we try this method for hashbang pages, we get x.com's cache... not x.com/#!/hashbangpage.
That actually makes sense: the fragment is never sent to the server, so the hashbang page is effectively part of the homepage, and I get why the cache returns the homepage.
My question is: how can you actually look up the cache for a hashbang page?
-
I was actually trying to give you the tools to figure out what's cached and indexed. You can just run a site search for the content and look at the cache, though - for example, search for an exact phrase from the page in quotes. If nothing shows up, it's probably not indexed.
-
Thanks Carson, but that wasn't the question.
The question was how to check the cache.
-
Generally I'd avoid hashbangs if you have large amounts of content you want indexed behind them. Use pushState instead whenever it makes sense for the user to actually change the URL.
The general rule is that if you can see the content in your page source (the Ctrl+U version), it's probably being indexed. That means content rendered client-side behind hashbangs is generally not indexed, whereas server-rendered content generally is.
If for some reason you must use hashbangs, AND you must client-render content, create an HTML snapshot of your page for Google. Generally, though, that's more effort than changing one of the above.
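For what it's worth, here's a minimal pushState sketch (the paths and renderer are made up for illustration - just to show the idea):

```typescript
// Minimal sketch of pushState navigation (hypothetical paths/renderer).
// Instead of stuffing state into "#!/some/page", give each view a real URL
// that crawlers can request from the server.
function navigateTo(path: string): void {
  history.pushState({ path }, "", path); // updates the address bar, no reload
  renderView(path);
}

// Keep the view in sync when the user hits back/forward.
window.addEventListener("popstate", (event: PopStateEvent) => {
  const state = event.state as { path?: string } | null;
  renderView(state?.path ?? window.location.pathname);
});

// Placeholder renderer; a real app would fetch and display content here.
function renderView(path: string): void {
  document.title = `Viewing ${path}`;
}
```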
-
I think Google has stopped responding to cache requests on hashbang pages altogether.
See here: **I'm just playing with random URLs and don't see Google's cache 404'ing as it should:** http://recordit.co/XBlo3U2A73
You can put anything there; it won't work.
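If you want to sanity-check that yourself, a quick script like this one (a sketch only - it assumes Node 18+ for the built-in fetch, and Google may well block or captcha scripted requests) will report the HTTP status the cache endpoint returns:

```typescript
// Sketch: report the HTTP status Google's cache endpoint returns for a URL.
// If missing cache entries still 404'd, a made-up URL would return 404;
// getting 200 back for junk suggests the endpoint is no longer reliable.
async function checkCacheStatus(url: string): Promise<void> {
  const cacheUrl =
    "http://webcache.googleusercontent.com/search?q=cache:" + url;
  const res = await fetch(cacheUrl, { redirect: "follow" });
  console.log(`${url} -> HTTP ${res.status}`);
}

checkCacheStatus("http://example.com/#!/made-up-page").catch(console.error);
```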
-
By searching for indexed and duplicate content: I put a line or two in quotes and Googled it. I found most of the UTM versions that way. Once you do that, it's a simple change to site:yoursite.com inurl:UTM.
-
Thanks a lot, Matt.
I'm curious... how exactly did you find the version with the UTM codes that is being cached?
-
Strangely, Browseo sees it correctly: http://www.browseo.net/?url=https%3A%2F%2Fplaceit.net%2F%3F_escaped_fragment_%3D%2Fstages%2Fsamsung-galaxy-note-friends-park
I'm not 100% sure why this is happening on your site specifically. Normally the #! isn't too big of an issue for cache, but I've seen it have a few hiccups. These pages seem to be indexed fine, but they aren't generating a cache.
I did find a few working, but only those with UTM codes:
This one doesn't look like it's working, but view the source code - the content is actually there. I found it by Googling the content in quotation marks.
-
What you're saying makes sense, and our URLs are set up like this, but we still see just the homepage come up when looking up the Google cache with the escaped fragment version:
http://webcache.googleusercontent.com/search?q=cache:https://placeit.net/?_escaped_fragment_=/stages/samsung-galaxy-note-friends-park
https://placeit.net/?_escaped_fragment_=/stages/samsung-galaxy-note-friends-park
homepage - http://webcache.googleusercontent.com/search?q=cache:https://placeit.net/?_escaped_fragment_=
-
Let's use a Wix example site (not a client, just a sample from their page). Say you wanted to check:
http://www.kingskolacheny.com/#!press/crr2
In the source code I see the escaped fragment URL. This is the one you can find a cache for:
http://www.kingskolacheny.com/?_escaped_fragment_=press/crr2
That leads me to: http://webcache.googleusercontent.com/search?q=cache:http://www.kingskolacheny.com/?_escaped_fragment_=press/crr2
If your #! URLs are not set up this way, you will struggle to see it. One-page websites are ... one page. But if you have escaped fragment URLs set up, you should be able to submit those and go from there.
The easiest way I know to find these is Screaming Frog: the AJAX tab, Ugly URL field - try that one.
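To make that mapping concrete, here's a small sketch of the #! to _escaped_fragment_ translation (my own illustration of Google's old AJAX crawling scheme, using the Wix sample URL above):

```typescript
// Sketch of the #! -> _escaped_fragment_ mapping from Google's (now
// deprecated) AJAX crawling scheme, plus the cache-lookup URL built from it.
function toEscapedFragmentUrl(hashbangUrl: string): string {
  const [base, fragment = ""] = hashbangUrl.split("#!");
  const joiner = base.includes("?") ? "&" : "?";
  // The scheme also called for URL-escaping special characters in the
  // fragment; kept plain here to match the URLs shown above.
  return `${base}${joiner}_escaped_fragment_=${fragment}`;
}

function toCacheLookupUrl(url: string): string {
  return `http://webcache.googleusercontent.com/search?q=cache:${url}`;
}

const snapshot = toEscapedFragmentUrl("http://www.kingskolacheny.com/#!press/crr2");
console.log(snapshot);
// -> http://www.kingskolacheny.com/?_escaped_fragment_=press/crr2
console.log(toCacheLookupUrl(snapshot));
```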
Related Questions
-
Google Indexing Of Pages As HTTPS vs HTTP
We recently updated our site to be mobile optimized. As part of the update, we had also planned on adding SSL security to the site. However, we use an iframe from a third-party vendor on a lot of our site pages for real estate listings, and that iframe was not SSL friendly; the vendor does not have a solution for that yet. So those iframes weren't displaying the content. As a result, we had to shift gears and go back to just being http instead of the new https we were hoping for. However, Google seems to have indexed a lot of our pages as https and gives a security error to any visitors. The new site was launched about a week ago, and there was code in the htaccess file that was pushing to www and https. I have fixed the htaccess file to no longer force https. My question is: will Google "reindex" the site once it recognizes the new htaccess commands in the next couple of weeks?
Intermediate & Advanced SEO | vikasnwu
-
Google Adsbot crawling order confirmation pages?
Hi, We have had roughly 1000+ requests per 24 hours from Google-adsbot to our confirmation pages. This generates an error, as the confirmation page cannot be viewed after closing or by anyone who didn't complete the order. How is google-adsbot finding pages to crawl that are not linked anywhere on the site, in the sitemap, or anywhere else? Is there any harm in a Google crawler receiving a higher percentage of errors, even though the pages are not supposed to be requested? Is there anything we can do to prevent the errors for the benefit of our network team, and what are the possible risks of any measures we can take? This bot seems to be for evaluating the quality of landing pages used for AdWords, so why is it trying to access confirmation pages when they have not been set for any of our adverts? We included "Disallow: /confirmation" in the robots.txt but it has continued to request these pages, generating a 403 page and an error in the log files, so it seems Adsbot doesn't follow robots.txt. Thanks in advance for any help, Sam
Intermediate & Advanced SEO | seoeuroflorist
-
Google cache is showing my UK homepage site instead of the US homepage and ranking the UK site in US
Hi There, When I check the cache of the US website (www.us.allsaints.com), Google returns the UK website. This is also reflected in the US Google search results, where the UK site ranks for our brand name instead of the US site. The hreflang tags are only on the homepage, and the domains have been pointed correctly to the right territories via Google Webmaster Console. This has happened before, on 26th July 2015, and I was wondering if anyone had any idea why this is happening or has experienced the same issue.
Intermediate & Advanced SEO | adzhass
-
Location Pages On Website vs Landing pages
We have been having a terrible time in the local search results for 20+ locations. I have Places set up and all, but we decided to create location pages on our sites for each location - a brief description and content optimized for our main service. The path would be something like .com/location/example. One option that has come up in question is to create landing pages / "mini websites" that would probably be location-example.url.com. I believe that the latter option, mini sites for each location, would be a bad idea, as those kinds of tactics were considered spammy in the past. What are your thoughts and resources, so I can convince my team of the best practice?
Intermediate & Advanced SEO | KJ-Rodgers
-
Does Google give weight to the default measurement units (metric / imperial) on pages?
Hi, We run a series of weather websites that cater for the units (feet, metres, Celsius, Fahrenheit etc.) preferred by users, by means of detecting their geo-location. So users in the US see the site in feet and Fahrenheit, and pretty much the rest of the world gets metric units. My concern is that if we view the cached version of our pages as seen by the Googlebot out of Mountain View, California, it shows that our geoIP switch to imperial units has been activated for every location in the world. The question is, does the fact that we appear to cater for countries who use metric units by showing (in Google's eyes) imperial units by default count against us from an SEO point of view? Thanks in advance for any comments, Nick
Intermediate & Advanced SEO | nickruss
-
Putting "noindex" on a page that's in an iframe... what will that mean for the parent page?
If I've got a page that is being called in an iframe on my homepage, and I don't want that called page to be indexed... so I put a noindex tag on the called page (but not on the homepage). What might that mean for the homepage? Nothing? Will Google, Bing, Yahoo, or anyone else potentially see that as a noindex tag on my homepage?
Intermediate & Advanced SEO | Philip-DiPatrizio
-
Dynamic pages - ecommerce product pages
Hi guys, Before I dive into my question, let me give you some background... I manage an ecommerce site and we've got thousands of product pages. The pages contain dynamic blocks, and the information in these blocks is fed by another system. So in a nutshell, our product team enters the data in a software tool and, boom, the information is generated in these page blocks. But that's not all: these pages then redirect to a duplicate version with a custom URL. This is cached, and this is what the end user sees. This was done to speed up load; rather than have the system generate a dynamic page on the fly, the cached page is loaded and the user sees it super fast. Another benefit happened as well: after going live with the cached pages, they started getting indexed and ranking in Google. The problem is that the redirect to the duplicate cached page isn't a permanent one; it's a meta refresh, a 302 that happens in a second. So yeah, I've got 302s kicking about. The development team can set up a 301, but then there won't be any caching; pages will just load dynamically. Google records pages that are cached, but does it cache a dynamic page, though? Without a cached page, I'm wondering if I would drop in traffic. The view source might just show a list of dynamic blocks - no content! How would you tackle this? I've already set up canonical tags on the cached pages, but removing cache... Thanks
Intermediate & Advanced SEO | Bio-RadAbs
-
301 - should I redirect entire domain or page for page?
Hi, We recently enabled a 301 on our domain from our old website to our new website. On the advice of fellow Mozzers, we copied the old site exactly to the new domain, then did the 301 so that the sites are identical. Question is, should we be doing the 301 as a whole-domain redirect, i.e. www.oldsite.com is now > www.newsite.com, or individually setting each page, i.e. www.oldsite.com/page1 is now www.newsite.com/page1, etc., for each page in our site? Remembering that both old and new sites (for now) are identical copies. Also, we set the 301 about 5 days ago and have verified it's working, but haven't seen a single change in rank from either the old site or the new - is this because Google likely hasn't re-indexed yet? Thanks, Anthony
Intermediate & Advanced SEO | Grenadi