Moz Q&A is closed.
After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content - many posts will remain viewable - we have locked both new posts and new replies.
Will Google crawl and rank our ReactJS website content?
-
We have 250+ products dynamically inserted and sorted on our site daily (more specifically our homepage... yes, it's a long page). Our dev team would like to explore rendering the page server-side using ReactJS.
We currently use a CDN to cache all the content, which of course we would like to continue using.
SO... will Google be able to crawl that content?
We've read some articles with different ideas (including prerendering):
http://andrewhfarmer.com/react-seo/
http://www.seoskeptic.com/json-ld-big-day-at-google/
If we were to load only the schema data important to the page (like product title, image, price, description, etc.) from the server and then let the client render the remaining content (comments, suggested products, etc.), would that go against best practices? It seems like that might be seen as showing Googlebot one version of the page and showing site visitors a different (more complete) version.
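To make the "schema from the server" idea concrete, here is a minimal sketch in plain Node.js of emitting the critical product data server-side as a JSON-LD block. Everything here - the product record, its field names, and the `buildProductJsonLd` helper - is invented for illustration, not taken from any particular stack:

```javascript
// Hypothetical product record; the field names are illustrative assumptions.
const product = {
  name: "Example Widget",
  image: "https://www.example.com/img/widget.jpg",
  description: "A sample product used to illustrate the markup.",
  price: "19.99",
  currency: "USD",
};

// Build a schema.org Product object and serialize it into a <script> tag
// that would be included in the server-rendered HTML.
function buildProductJsonLd(p) {
  const schema = {
    "@context": "https://schema.org",
    "@type": "Product",
    name: p.name,
    image: p.image,
    description: p.description,
    offers: {
      "@type": "Offer",
      price: p.price,
      priceCurrency: p.currency,
    },
  };
  return `<script type="application/ld+json">${JSON.stringify(schema)}</script>`;
}

console.log(buildProductJsonLd(product));
```

Because this block is in the initial HTML response, it reaches the crawler even if the rest of the page is rendered client-side.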
-
What exactly are you planning to render server-side? In principle, you shouldn't have anything to worry about if you render everything server-side, provided the rendering isn't so slow that it affects Google's measures of page speed.
What do you see when you use the 'Fetch and Render' feature in Search Console at present?
-
Google does crawl JavaScript, and they do index it. Googlebot renders pages with a recent version of Chrome (a headless Chromium), so it will see the information that you give to it and most likely the remaining client-rendered content as well. Keep in mind that cloaking is against their guidelines, so it may get the site penalized.
I would go ahead and give Google and all visitors the full content.
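The prerendering articles linked in the question describe "dynamic rendering" setups that branch on the crawler's user agent. A minimal sketch of that check follows; the `isCrawler` helper and the bot-token list are illustrative assumptions (real bot lists are longer), and - as this answer stresses - the snapshot served to bots must contain the same content users see, or it crosses into cloaking:

```javascript
// Minimal user-agent check of the kind prerender-style setups use.
// The list of bot tokens here is illustrative, not exhaustive.
const BOT_PATTERN = /googlebot|bingbot|yandex|baiduspider|duckduckbot/i;

function isCrawler(userAgent) {
  return BOT_PATTERN.test(userAgent || "");
}

// A prerender middleware would branch on this, e.g.:
//   if (isCrawler(req.headers["user-agent"])) serve the cached HTML snapshot;
//   else serve the normal client-rendered app.
// The snapshot must match what users see, or it becomes cloaking.

console.log(isCrawler("Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)")); // true
console.log(isCrawler("Mozilla/5.0 (Windows NT 10.0) Chrome/120.0")); // false
```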
Related Questions
-
Why did Google de-rank a website?
Hi, I was inspecting a website covering the topic of the best wheelbarrows of 2021. It is a new website and it started ranking on Google, but after a few days it was de-ranked automatically, and Moz is also not showing any results for it. I was wondering why this happened, and what I should do when I build my own website so that I won't face this kind of situation.
Why can't Google's mobile-friendly test access my website?
I am getting the following error when trying to use Google's mobile-friendly tool: "The page cannot be reached. This could be because the page is unavailable or blocked by robots.txt." I don't have anything blocked by robots.txt or a robots tag, and I also manage to render my pages with Google Search Console's Fetch and Render. So what could be the reason the tool can't access my website? Also, the mobile usability report in Search Console works but reports very little, and the Google speed test also doesn't work. Any ideas as to the reason and how to fix this?
Ranking penalty for "accordion" content -- hidden prior to user interaction
Will content inside an "accordion" module be ranked the same as non-hidden content? Is there an official guide from Google or other search engines addressing this? Example of an accordion element: https://v4-alpha.getbootstrap.com/components/collapse/#accordion-example Will all elements in the example above be seen and treated equally by search engines?
How does Google treat Content hidden in click-to-expand tabs?
Hi peeps, I'm working on a web build project, and there is a debate going on between our UX and SEO departments regarding hidden content in click-to-expand tabs. The UX team suggests that using these tabs is a legitimate method of making large amounts of copy more easily digestible for readers. The tabs are for FAQs (hopefully you can view the wireframe URL), and the SEO team is concerned that the content in these tabs contains some core keyword phrases which may not be indexed. I am the project lead on this and honestly can't claim to be an expert in either discipline, so any advice would be very welcome. Can search engines index content hidden in these tabs? Thank you in advance for any advice shared. Nicky
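As a point of reference, content in a Bootstrap v4 collapse (the accordion pattern linked in the previous question) ships in the initial HTML and is only hidden visually, so a crawler that fetches the page source still receives the text. A minimal markup sketch, with invented ids and copy:

```html
<!-- The answer text is present in the initial HTML even while collapsed;
     only CSS/JS hides it from view, so a crawler can still read it. -->
<button data-toggle="collapse" data-target="#faq-1">Show answer</button>
<div id="faq-1" class="collapse">
  <p>The full FAQ answer lives here in the page source.</p>
</div>
```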
Can blocked URL parameters still be crawled and indexed by Google?
Hi guys, I have two questions, and one might be a dumb question, but here it goes. I just want to be sure that I understand: if I tell Webmaster Tools to ignore a URL parameter, will Google still index and rank my URL? Is it okay if I don't append the brand filter to the URL structure? Will I still rank for that brand? Thanks. PS: okay, 3 questions :)...
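On the first question: the URL Parameters setting influences how Google crawls, not what it indexes, so emitting a parameter-free canonical URL is generally the more reliable control. A small sketch of stripping filter parameters with the standard WHATWG `URL` API; the parameter names ("brand", "sort") and the example domain are assumptions for illustration:

```javascript
// Parameters to strip before emitting a rel=canonical URL.
// These names are illustrative assumptions, not a recommendation list.
const IGNORED_PARAMS = ["brand", "sort"];

function canonicalUrl(rawUrl) {
  const url = new URL(rawUrl);
  for (const param of IGNORED_PARAMS) {
    url.searchParams.delete(param);
  }
  return url.toString();
}

console.log(canonicalUrl("https://shop.example.com/shoes?brand=acme&sort=price"));
// → https://shop.example.com/shoes
```

The returned value would go into the page's `<link rel="canonical">` tag, so filtered variants consolidate onto the unfiltered URL.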
Sites Copying my Content Ranking Higher
A number of sites are copying my content - either 100% word for word, or paragraphs, or sentences - and are ranking higher. Some sites are doing this with permission and properly linking back to my article; others are not linking back or giving credit. Some of these sites, in some cases, are ranking higher than me in Google results. What can I do?
Dynamically-generated .PDF files, instead of normal pages, indexed by and ranking in Google
Hi, I've come across a tough problem. I am working on an online-store website (built on the Joomla CMS) which includes the functionality of viewing product details in PDF format. When I search my site's name in Google, the SERP displays my PDF files in the first couple of positions (shown in the normal [PDF] result format), and I cannot find the normal pages on SERP #1 unless I search for the full site domain in Google. I really don't want this! Could you please tell me how to figure the problem out and solve it? I could remove the component (VirtueMart) that is in charge of generating the PDF files. For now I am planning to redirect all the PDF pages ranking in Google to a 404 page, remove the functionality, regenerate my site's sitemap, and submit it to Google. Will that work for me? I'd really appreciate it if you could help solve this problem. Thanks very much.
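Rather than redirecting the PDFs to a 404 page, one common alternative is to send a `noindex` header for PDF responses, so users can still open them while Google drops them from the index on recrawl. A sketch for Apache (assuming the Joomla site runs on Apache with `mod_headers` enabled, which is an assumption here):

```apache
# Send a noindex header for every dynamically generated PDF so Google drops
# them from the index while users can still open the files.
<FilesMatch "\.pdf$">
  Header set X-Robots-Tag "noindex, nofollow"
</FilesMatch>
```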
Do search engines still index/crawl private content?
If you have a membership site, which requires a payment to access specific content/images/videos, do search engines still use that content as a ranking/domain authority factor? Is it worth optimizing these "private" pages for SEO?