Lazy Loading of Blog Posts and Crawl Depths
-
Hi Moz Fans,
We are looking at our blog and improving the content as much as we can for SEO purposes, but we have drawn a bit of a blank on the implications of lazy loading and issues with crawl depth.
We introduced lazy loading onto the blog home page, initially to increase site speed, and it works well with infinite scroll, but we were wondering whether this would cause any issues regarding SEO.
A lot of the resources online seem to be conflicting and some are very outdated, so some clarification on what is best in terms of lazy loading and crawl depths for blogs would be fantastic!
I hope someone can help and give us some up-to-date insights - if you need any more information, I'll reply ASAP.
-
This is fantastic - Thank you!
-
Lazy load and infinite scroll are absolutely not the same thing, as far as search crawlers are concerned.
Lazy-loaded content, if it exists in the DOM of the page, will be indexed, but its importance will likely be reduced (any content that requires user interaction to see is reduced in ranking value).
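For illustration (not from the original reply), here's a minimal sketch of a lazy-loading pattern where the post text and links are server-rendered in the DOM and only the images load on scroll; the data-src attribute and selectors are assumptions:

```typescript
// Minimal sketch: post text/links are already in the server-rendered DOM;
// only images are deferred, so a crawler still sees the content in the HTML.
const observer = new IntersectionObserver((entries, obs) => {
  for (const entry of entries) {
    if (!entry.isIntersecting) continue;
    const img = entry.target as HTMLImageElement;
    const src = img.dataset.src; // real URL stored in a data-src attribute
    if (src) img.src = src;      // swap in once scrolled into view
    obs.unobserve(img);
  }
});

// Assumes markup like: <img data-src="/images/post-1.jpg" alt="...">
document.querySelectorAll<HTMLImageElement>("img[data-src]")
  .forEach((img) => observer.observe(img));
```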
But because infinite scroll is unmanageable for the crawler (it's not going to stay on one page and keep crawling for hours as every blog post rolls into view), Google's John Mueller has said the crawler will simply stop at the bottom of the initial page load.
This webinar/discussion on crawl and rendering from just last week included Google's John Mueller and a Google engineer, and will give you exactly the info you're looking for, right from the horse's mouth, Victoria.
To consider though - the blog's index page shouldn't be the primary source for the blog's content anyway. The individual permalinked post URLs are what should be crawled and ranking for the individual post content, and the XML sitemap should be the primary source for Google's discovery of those URLs. Obviously, linking from authoritative pages will help the posts, but that's going to change every time the blog index page updates anyway.
Also, did you know that you can submit the blog's RSS feed as a sitemap in addition to the XML sitemap? It's the fastest way I've found of getting new blog posts crawled/indexed.
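To make the sitemap point concrete, here is a minimal sketch (my illustration, not from the reply) that generates an XML sitemap from a list of post URLs; the URLs and output path are hypothetical:

```typescript
// Minimal sketch: build sitemap.xml from known post URLs (URLs are hypothetical).
// The blog's RSS feed URL can also be submitted as a sitemap in Search Console.
import { writeFileSync } from "node:fs";

const postUrls: string[] = [
  "https://www.example.com/blog/first-post",
  "https://www.example.com/blog/second-post",
];

const entries = postUrls
  .map((url) => `  <url><loc>${url}</loc></url>`)
  .join("\n");

const sitemap =
  `<?xml version="1.0" encoding="UTF-8"?>\n` +
  `<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n` +
  `${entries}\n</urlset>\n`;

writeFileSync("sitemap.xml", sitemap);
```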
Hope that helps!
Paul
-
I'm afraid I don't have an insight into how Google crawls with lazy loading.
Which works better for your user, pagination or lazy loading? I wouldn't worry about lazy loading and Google. If you're worried about getting pages indexed then I would make sure you've got a sitemap that works correctly.
-
Great, thank you
Do you have any insight into crawl depth too?
At what point would Google stop crawling the page with lazy loading? Is it best to use pagination as opposed to infinite scroll?
-
With lazy loading, the content is still present in the page's source code. That's what Google reads, so you should be fine using this, as it's becoming a common practice now.
-
Yes, it's similar to the BBC page and loads when it is needed by the user, so to speak.
It improved the site's loading speed, but do you know at what point Google would stop indexing the content on our site?
How do we ensure that the posts are being crawled, and is pagination the best way to go?
-
I'd have to say I'm not too familiar with the method you are using, but I take it the idea is that elements of the page load as you scroll, like the BBC?
If it decreases the load time of the site, that is good for both direct and indirect SEO. But the key thing is: can Google see the contents of the page or not? Use Google Search Console and fetch the page to see if it contains the content.
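As a rough companion check (my addition, not from the reply), here's a minimal sketch that fetches the raw HTML - roughly what a crawler sees before any JavaScript runs - and checks whether post content is present; the URL and phrases are hypothetical:

```typescript
// Minimal sketch: fetch the raw HTML (no JavaScript executed) and check
// whether key post content is present. URL and phrases are hypothetical.
const url = "https://www.example.com/blog/";
const phrases = ["First post title", "Second post title"];

const res = await fetch(url);
const html = await res.text();

for (const phrase of phrases) {
  const found = html.includes(phrase);
  console.log(`${found ? "FOUND" : "MISSING"} in raw HTML: "${phrase}"`);
}
```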
Also, Google will not hang around on your site; if it doesn't serve the content within a reasonable amount of time, it will bounce off to the next page, or the next site to crawl. It's harsh, but it's a fact.
Related Questions
-
How to create a smooth blog migration from a subdomain to a subfolder on the main domain?
Hi mozzers, We have decided to migrate the blog subdomain to the domain's subfolder (blog.example.com to example.com/blog). To do this the most effective way and avoid impacting SEO negatively, I believe I have to follow this checklist:
- Create a list of all 301 redirects from blog.example.com/post-1 to example.com/post-1
- Make sure title tags remain the same on the main domain
- Make sure internal links remain the same
Is there something else I am missing? Any other best practices? I also would like to have all blog posts as AMPs. Any recommendations on whether this is something we should do, since we are not a media site? Any other tips on successfully implementing those types of pages? Thanks
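For illustration only - a minimal sketch of the kind of one-to-one 301 redirect described in the checklist above, assuming an Express server; the framework, hostnames, and paths are assumptions, not from the question:

```typescript
// Minimal sketch: 301-redirect old subdomain post URLs to the main domain.
// Framework (Express) and URL scheme are assumptions, not from the thread.
import express from "express";

const app = express();

app.use((req, res, next) => {
  // Only rewrite requests that arrive on the old blog subdomain.
  if (req.hostname === "blog.example.com") {
    res.redirect(301, `https://www.example.com/blog${req.path}`);
    return;
  }
  next();
});

app.listen(3000);
```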
Intermediate & Advanced SEO | Ty1986
-
Pagination loading using AJAX. Should I change this?
Hello, while I was checking this site: http://www.disfracessimon.com/disfraces-adultos-16.html I found that the pagination is working this way: http://www.disfracessimon.com/disfraces-adultos-16.html#/page-2, http://www.disfracessimon.com/disfraces-adultos-16.html#/page-3, and content is being loaded using AJAX. So, Google is not getting the paginated results. Is this a big issue, or is there no problem? Should I create a link for "See All Products", or is it not a big issue? Thank you!
Intermediate & Advanced SEO | teconsite
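For context (not from the question), a minimal sketch of how AJAX pagination can expose real, crawlable URLs instead of #/page-N fragments by using history.pushState; the endpoint and container id are hypothetical:

```typescript
// Minimal sketch: load the next page of products via AJAX, but update the
// address bar to a real, crawlable URL (e.g. ?page=2) instead of #/page-2.
// Endpoint and container id are hypothetical.
async function loadPage(page: number): Promise<void> {
  const res = await fetch(`/disfraces-adultos-16.html?page=${page}`);
  const html = await res.text();

  const container = document.getElementById("product-list");
  if (container) container.innerHTML = html;

  // Each paginated state gets its own URL that the server can also render,
  // so crawlers can reach page 2, 3, ... without executing JavaScript.
  history.pushState({ page }, "", `?page=${page}`);
}
```
-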
Would you rate-control Googlebot? How much crawling is too much crawling?
One of our sites is very large - over 500M pages. Google has indexed 1/8th of the site - and they tend to crawl between 800k and 1M pages per day. A few times a year, Google will significantly increase their crawl rate - overnight hitting 2M pages per day or more. This creates big problems for us, because at 1M pages per day Google is consuming 70% of our API capacity, and the API overall is at 90% capacity. At 2M pages per day, 20% of our page requests are 500 errors. I've lobbied for an investment / overhaul of the API configuration to allow for more Google bandwidth without compromising user experience. My tech team counters that it's a wasted investment, as Google will crawl to our capacity, whatever that capacity is. Questions for Enterprise SEOs:
- Is there any validity to the tech team's claim? I thought Google's crawl rate was based on a combination of PageRank and the frequency of page updates. This indicates there is some upper limit - which we perhaps haven't reached - but which would stabilize once reached.
- We've asked Google to rate-limit our crawl rate in the past. Is that harmful? I've always looked at a robust crawl rate as a good problem to have. Is 1.5M Googlebot API calls a day desirable, or something any reasonable Enterprise SEO would seek to throttle back?
- What about setting a longer refresh rate in the sitemaps? Would that reduce the daily crawl demand? We could increase it to a month, but at 500M pages Google could still have a ball at the 2M pages/day rate.
Thanks
Intermediate & Advanced SEO | lzhao
-
Google crawling different content--ever ok?
Here are a couple of scenarios I'm encountering where Google will crawl different content than my users see on an initial visit to the site - and which I think should be ok. Of course, it is normally NOT ok; I'm here to find out if Google is flexible enough to allow these situations:
1. My mobile-friendly site has users select a city, and then it displays the location options div, which includes an explanation of why they may want to have the program use their GPS location. The user must choose the GPS option, the entire city, or he can enter a zip code, or choose a suburb of the city, which then goes to the link chosen. OTOH, it is programmed so that if it is a Google bot it doesn't get just a meaningless 'choose further' page; rather, the crawler sees the page of results for the entire city (as you would expect from the URL). So basically the program defaults to the entire-city results for Googlebot, but the user first gets the ability to choose GPS.
2. A user comes to mysite.com/gps-loc/city/results. The site, seeing the literal words 'gps-loc' in the URL, goes out and fetches the GPS coordinates for his location and returns results dependent on his location. If Googlebot comes to that URL then there is no way the program will return the same results, because the program wouldn't be able to get the same longitude/latitude as that user.
So, what do you think? Are these scenarios a concern for getting penalized by Google? Thanks, Ted
Intermediate & Advanced SEO | friendoffood
-
How reliable is the link depth info from Xenu?
Hi everyone! I searched existing Q & A and couldn't find an answer to this question. Here is the scenario: The site is: http://www.ccisolutions.com I am seeing instances of category pages being identified as 8 levels deep. For example, this one: http://www.ccisolutions.com/StoreFront/category/B8I This URL redirects to http://www.ccisolutions.com/StoreFront/category/headphones - which Xenu identifies as being only 1 level deep. Xenu does not seem to be recognizing that the first URL 301-redirects to the second. Is this normal for the way Xenu typically reports? If so, why is the first URL indicated to be so much further down in the structure? Is this an indication of site architecture problems? Or is it an indication of problems with how our 301-redirects are being handled? Both? Thanks in advance for your thoughts!
Intermediate & Advanced SEO | danatanseo
-
Why is my Crawl Report Showing Thousands of Pages that Do Not Exist?
Hi, I just downloaded a Crawl Summary Report for a client's website. I am seeing THOUSANDS of duplicate page content errors. The overwhelming majority of them look something like this: ERROR: http://www.earlyinterventionsupport.com/resources/parentingtips/development/parentingtips/development/development/development/development/development/development/parentingtips/specialneeds/default.aspx This page doesn't exist and results in a 404 page. Why are these pages showing up? How do I get rid of them? Are they endangering the health of my site as a whole? Thank you, Jenna
Intermediate & Advanced SEO | JennaCMag